00:00:00.001 Started by upstream project "autotest-nightly" build number 4361
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3724
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.154 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.155 The recommended git tool is: git
00:00:00.155 using credential 00000000-0000-0000-0000-000000000002
00:00:00.157 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.201 Fetching changes from the remote Git repository
00:00:00.202 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.232 Using shallow fetch with depth 1
00:00:00.232 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.232 > git --version # timeout=10
00:00:00.261 > git --version # 'git version 2.39.2'
00:00:00.261 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:08.223 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:08.235 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:08.249 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:08.249 > git config core.sparsecheckout # timeout=10
00:00:08.262 > git read-tree -mu HEAD # timeout=10
00:00:08.278 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:08.295 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:08.295 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:08.422 [Pipeline] Start of Pipeline
00:00:08.441 [Pipeline] library
00:00:08.443 Loading library shm_lib@master
00:00:08.443 Library shm_lib@master is cached. Copying from home.
00:00:08.458 [Pipeline] node
00:00:08.469 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:08.470 [Pipeline] {
00:00:08.480 [Pipeline] catchError
00:00:08.482 [Pipeline] {
00:00:08.492 [Pipeline] wrap
00:00:08.500 [Pipeline] {
00:00:08.509 [Pipeline] stage
00:00:08.511 [Pipeline] { (Prologue)
00:00:08.528 [Pipeline] echo
00:00:08.530 Node: VM-host-SM38
00:00:08.536 [Pipeline] cleanWs
00:00:08.546 [WS-CLEANUP] Deleting project workspace...
00:00:08.546 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.555 [WS-CLEANUP] done
00:00:08.741 [Pipeline] setCustomBuildProperty
00:00:08.817 [Pipeline] httpRequest
00:00:09.112 [Pipeline] echo
00:00:09.114 Sorcerer 10.211.164.20 is alive
00:00:09.122 [Pipeline] retry
00:00:09.123 [Pipeline] {
00:00:09.136 [Pipeline] httpRequest
00:00:09.141 HttpMethod: GET
00:00:09.142 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.143 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.160 Response Code: HTTP/1.1 200 OK
00:00:09.161 Success: Status code 200 is in the accepted range: 200,404
00:00:09.161 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.295 [Pipeline] }
00:00:34.313 [Pipeline] // retry
00:00:34.321 [Pipeline] sh
00:00:34.619 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.638 [Pipeline] httpRequest
00:00:35.028 [Pipeline] echo
00:00:35.030 Sorcerer 10.211.164.20 is alive
00:00:35.040 [Pipeline] retry
00:00:35.041 [Pipeline] {
00:00:35.056 [Pipeline] httpRequest
00:00:35.061 HttpMethod: GET
00:00:35.062 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:35.062 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:35.079 Response Code: HTTP/1.1 200 OK
00:00:35.079 Success: Status code 200 is in the accepted range: 200,404
00:00:35.080 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:21.076 [Pipeline] }
00:01:21.094 [Pipeline] // retry
00:01:21.101 [Pipeline] sh
00:01:21.388 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:23.949 [Pipeline] sh
00:01:24.233 + git -C spdk log --oneline -n5
00:01:24.233 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:24.233 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:24.233 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:24.233 66289a6db build: use VERSION file for storing version
00:01:24.233 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:24.252 [Pipeline] writeFile
00:01:24.266 [Pipeline] sh
00:01:24.553 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:24.567 [Pipeline] sh
00:01:24.851 + cat autorun-spdk.conf
00:01:24.851 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:24.851 SPDK_TEST_NVME=1
00:01:24.851 SPDK_TEST_FTL=1
00:01:24.851 SPDK_TEST_ISAL=1
00:01:24.851 SPDK_RUN_ASAN=1
00:01:24.851 SPDK_RUN_UBSAN=1
00:01:24.851 SPDK_TEST_XNVME=1
00:01:24.851 SPDK_TEST_NVME_FDP=1
00:01:24.851 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:24.859 RUN_NIGHTLY=1
00:01:24.861 [Pipeline] }
00:01:24.875 [Pipeline] // stage
00:01:24.889 [Pipeline] stage
00:01:24.891 [Pipeline] { (Run VM)
00:01:24.904 [Pipeline] sh
00:01:25.188 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:25.189 + echo 'Start stage prepare_nvme.sh'
00:01:25.189 Start stage prepare_nvme.sh
00:01:25.189 + [[ -n 10 ]]
00:01:25.189 + disk_prefix=ex10
00:01:25.189 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:25.189 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:25.189 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:25.189 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.189 ++ SPDK_TEST_NVME=1
00:01:25.189 ++ SPDK_TEST_FTL=1
00:01:25.189 ++ SPDK_TEST_ISAL=1
00:01:25.189 ++ SPDK_RUN_ASAN=1
00:01:25.189 ++ SPDK_RUN_UBSAN=1
00:01:25.189 ++ SPDK_TEST_XNVME=1
00:01:25.189 ++ SPDK_TEST_NVME_FDP=1
00:01:25.189 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.189 ++ RUN_NIGHTLY=1
00:01:25.189 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:25.189 + nvme_files=()
00:01:25.189 + declare -A nvme_files
00:01:25.189 + backend_dir=/var/lib/libvirt/images/backends
00:01:25.189 + nvme_files['nvme.img']=5G
00:01:25.189 + nvme_files['nvme-cmb.img']=5G
00:01:25.189 + nvme_files['nvme-multi0.img']=4G
00:01:25.189 + nvme_files['nvme-multi1.img']=4G
00:01:25.189 + nvme_files['nvme-multi2.img']=4G
00:01:25.189 + nvme_files['nvme-openstack.img']=8G
00:01:25.189 + nvme_files['nvme-zns.img']=5G
00:01:25.189 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:25.189 + (( SPDK_TEST_FTL == 1 ))
00:01:25.189 + nvme_files["nvme-ftl.img"]=6G
00:01:25.189 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:25.189 + nvme_files["nvme-fdp.img"]=1G
00:01:25.189 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:25.189 + for nvme in "${!nvme_files[@]}"
00:01:25.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi2.img -s 4G
00:01:25.189 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.189 + for nvme in "${!nvme_files[@]}"
00:01:25.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-ftl.img -s 6G
00:01:25.189 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:25.189 + for nvme in "${!nvme_files[@]}"
00:01:25.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-cmb.img -s 5G
00:01:25.189 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.189 + for nvme in "${!nvme_files[@]}"
00:01:25.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-openstack.img -s 8G
00:01:25.189 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:25.189 + for nvme in "${!nvme_files[@]}"
00:01:25.189 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-zns.img -s 5G
00:01:25.451 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:25.451 + for nvme in "${!nvme_files[@]}"
00:01:25.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi1.img -s 4G
00:01:25.451 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.451 + for nvme in "${!nvme_files[@]}"
00:01:25.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-multi0.img -s 4G
00:01:25.451 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:25.451 + for nvme in "${!nvme_files[@]}"
00:01:25.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme-fdp.img -s 1G
00:01:25.451 Formatting '/var/lib/libvirt/images/backends/ex10-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:25.451 + for nvme in "${!nvme_files[@]}"
00:01:25.451 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex10-nvme.img -s 5G
00:01:25.711 Formatting '/var/lib/libvirt/images/backends/ex10-nvme.img', fmt=raw size=5368709120 preallocation=falloc
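Reconstructed from the xtrace above, the image-creation step amounts to roughly the following bash. This is a sketch inferred from the trace, not the verbatim contents of jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh:

    # Sketch: map image names to sizes, then create each backing file.
    declare -A nvme_files
    disk_prefix=ex10
    backend_dir=/var/lib/libvirt/images/backends
    nvme_files['nvme.img']=5G
    nvme_files['nvme-cmb.img']=5G
    nvme_files['nvme-multi0.img']=4G
    nvme_files['nvme-multi1.img']=4G
    nvme_files['nvme-multi2.img']=4G
    nvme_files['nvme-openstack.img']=8G
    nvme_files['nvme-zns.img']=5G
    # Extra images appear only when the matching test flag is set (per the trace):
    (( SPDK_TEST_FTL == 1 )) && nvme_files['nvme-ftl.img']=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files['nvme-fdp.img']=1G
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done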
00:01:25.711 ++ sudo grep -rl ex10-nvme.img /etc/libvirt/qemu
00:01:25.711 + echo 'End stage prepare_nvme.sh'
00:01:25.711 End stage prepare_nvme.sh
00:01:25.722 [Pipeline] sh
00:01:26.002 + DISTRO=fedora39
00:01:26.002 + CPUS=10
00:01:26.002 + RAM=12288
00:01:26.002 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:26.002 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex10-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex10-nvme.img -b /var/lib/libvirt/images/backends/ex10-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex10-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:26.002
00:01:26.002 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:26.002 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:26.002 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:26.002 HELP=0
00:01:26.002 DRY_RUN=0
00:01:26.002 NVME_FILE=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,/var/lib/libvirt/images/backends/ex10-nvme.img,/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,
00:01:26.002 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:26.002 NVME_AUTO_CREATE=0
00:01:26.002 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex10-nvme-multi1.img:/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,,
00:01:26.002 NVME_CMB=,,,,
00:01:26.002 NVME_PMR=,,,,
00:01:26.002 NVME_ZNS=,,,,
00:01:26.002 NVME_MS=true,,,,
00:01:26.002 NVME_FDP=,,,on,
00:01:26.002 SPDK_VAGRANT_DISTRO=fedora39
00:01:26.002 SPDK_VAGRANT_VMCPU=10
00:01:26.002 SPDK_VAGRANT_VMRAM=12288
00:01:26.002 SPDK_VAGRANT_PROVIDER=libvirt
00:01:26.002 SPDK_VAGRANT_HTTP_PROXY=
00:01:26.002 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:26.002 SPDK_OPENSTACK_NETWORK=0
00:01:26.002 VAGRANT_PACKAGE_BOX=0
00:01:26.002 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:26.002 FORCE_DISTRO=true
00:01:26.002 VAGRANT_BOX_VERSION=
00:01:26.002 EXTRA_VAGRANTFILES=
00:01:26.002 NIC_MODEL=e1000
00:01:26.002
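Correlating the Setup line with the NVME_* variables it produced suggests the per-disk -b field layout sketched below. This is an inference from this log alone, not documented syntax, and since the cmb/pmr/zns fields are all empty here their relative order is a guess (paths shortened with "..." for readability):

    # -b <image>[,<type>[,<extra-ns images>[,<cmb>[,<pmr>[,<zns>[,<ms>[,<fdp>]]]]]]]
    -b .../ex10-nvme-ftl.img,nvme,,,,,true                            # field 7 -> NVME_MS=true,,,,
    -b .../ex10-nvme-multi0.img,nvme,.../multi1.img:.../multi2.img    # field 3 -> NVME_DISKS_NAMESPACES
    -b .../ex10-nvme-fdp.img,nvme,,,,,,on                             # field 8 -> NVME_FDP=,,,on,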
00:01:26.002 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:26.002 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:28.585 Bringing machine 'default' up with 'libvirt' provider...
00:01:28.846 ==> default: Creating image (snapshot of base box volume).
00:01:28.846 ==> default: Creating domain with the following settings...
00:01:28.846 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734179248_bbe5b978b159296a171d
00:01:28.846 ==> default: -- Domain type: kvm
00:01:28.846 ==> default: -- Cpus: 10
00:01:28.846 ==> default: -- Feature: acpi
00:01:28.846 ==> default: -- Feature: apic
00:01:28.846 ==> default: -- Feature: pae
00:01:28.846 ==> default: -- Memory: 12288M
00:01:28.846 ==> default: -- Memory Backing: hugepages:
00:01:28.846 ==> default: -- Management MAC:
00:01:28.846 ==> default: -- Loader:
00:01:28.846 ==> default: -- Nvram:
00:01:28.846 ==> default: -- Base box: spdk/fedora39
00:01:28.846 ==> default: -- Storage pool: default
00:01:28.846 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734179248_bbe5b978b159296a171d.img (20G)
00:01:28.846 ==> default: -- Volume Cache: default
00:01:28.846 ==> default: -- Kernel:
00:01:28.846 ==> default: -- Initrd:
00:01:28.846 ==> default: -- Graphics Type: vnc
00:01:28.846 ==> default: -- Graphics Port: -1
00:01:28.846 ==> default: -- Graphics IP: 127.0.0.1
00:01:28.846 ==> default: -- Graphics Password: Not defined
00:01:28.846 ==> default: -- Video Type: cirrus
00:01:28.846 ==> default: -- Video VRAM: 9216
00:01:28.846 ==> default: -- Sound Type:
00:01:28.846 ==> default: -- Keymap: en-us
00:01:28.846 ==> default: -- TPM Path:
00:01:28.846 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:28.846 ==> default: -- Command line args:
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:28.846 ==> default: -> value=-drive,
00:01:28.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:28.846 ==> default: -> value=-drive,
00:01:28.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme.img,if=none,id=nvme-1-drive0,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:28.846 ==> default: -> value=-drive,
00:01:28.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.846 ==> default: -> value=-drive,
00:01:28.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.846 ==> default: -> value=-drive,
00:01:28.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:28.846 ==> default: -> value=-drive,
00:01:28.846 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:28.846 ==> default: -> value=-device,
00:01:28.846 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:29.107 ==> default: Creating shared folders metadata...
00:01:29.107 ==> default: Starting domain.
00:01:31.023 ==> default: Waiting for domain to get an IP address...
00:01:49.151 ==> default: Waiting for SSH to become available...
00:01:49.151 ==> default: Configuring and enabling network interfaces...
00:01:53.358 default: SSH address: 192.168.121.61:22
00:01:53.358 default: SSH username: vagrant
00:01:53.358 default: SSH auth method: private key
00:01:54.746 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:02.888 ==> default: Mounting SSHFS shared folder...
00:02:05.429 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:05.429 ==> default: Checking Mount..
00:02:05.995 ==> default: Folder Successfully Mounted!
00:02:05.995
00:02:05.995 SUCCESS!
00:02:05.995
00:02:05.995 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:05.995 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:05.995 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:05.995
00:02:06.262 [Pipeline] }
00:02:06.277 [Pipeline] // stage
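Condensed from the -device/-drive pairs in the domain settings above, the FDP-capable controller (serial 12343) corresponds to a QEMU fragment like the following. This is a sketch assembled from the traced arguments only; the machine type, memory, NIC, and the other three controllers are omitted:

    # Sketch: the fourth controller, attached to an FDP-enabled NVMe subsystem.
    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex10-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096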
00:02:06.285 [Pipeline] dir
00:02:06.286 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:06.288 [Pipeline] {
00:02:06.300 [Pipeline] catchError
00:02:06.301 [Pipeline] {
00:02:06.314 [Pipeline] sh
00:02:06.685 + vagrant ssh-config --host vagrant
00:02:06.685 + sed -ne '/^Host/,$p'
00:02:06.685 + tee ssh_conf
00:02:09.216 Host vagrant
00:02:09.216 HostName 192.168.121.61
00:02:09.216 User vagrant
00:02:09.216 Port 22
00:02:09.216 UserKnownHostsFile /dev/null
00:02:09.216 StrictHostKeyChecking no
00:02:09.216 PasswordAuthentication no
00:02:09.216 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:09.216 IdentitiesOnly yes
00:02:09.216 LogLevel FATAL
00:02:09.216 ForwardAgent yes
00:02:09.216 ForwardX11 yes
00:02:09.216
00:02:09.227 [Pipeline] withEnv
00:02:09.229 [Pipeline] {
00:02:09.237 [Pipeline] sh
00:02:09.509 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:09.509 source /etc/os-release
00:02:09.509 [[ -e /image.version ]] && img=$(< /image.version)
00:02:09.509 # Minimal, systemd-like check.
00:02:09.509 if [[ -e /.dockerenv ]]; then
00:02:09.509 # Clear garbage from the node'\''s name:
00:02:09.509 # agt-er_autotest_547-896 -> autotest_547-896
00:02:09.509 # $HOSTNAME is the actual container id
00:02:09.509 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:09.509 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:09.509 # We can assume this is a mount from a host where container is running,
00:02:09.509 # so fetch its hostname to easily identify the target swarm worker.
00:02:09.509 container="$(< /etc/hostname) ($agent)"
00:02:09.509 else
00:02:09.509 # Fallback
00:02:09.509 container=$agent
00:02:09.509 fi
00:02:09.509 fi
00:02:09.509 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:09.509 '
00:02:09.777 [Pipeline] }
00:02:09.796 [Pipeline] // withEnv
00:02:09.804 [Pipeline] setCustomBuildProperty
00:02:09.814 [Pipeline] stage
00:02:09.816 [Pipeline] { (Tests)
00:02:09.828 [Pipeline] sh
00:02:10.105 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:10.374 [Pipeline] sh
00:02:10.650 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:10.922 [Pipeline] timeout
00:02:10.922 Timeout set to expire in 50 min
00:02:10.925 [Pipeline] {
00:02:10.948 [Pipeline] sh
00:02:11.227 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:11.794 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:11.805 [Pipeline] sh
00:02:12.084 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:12.355 [Pipeline] sh
00:02:12.633 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:12.906 [Pipeline] sh
00:02:13.185 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:13.185 ++ readlink -f spdk_repo
00:02:13.185 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:13.185 + [[ -n /home/vagrant/spdk_repo ]]
00:02:13.185 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:13.185 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:13.185 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:13.185 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:13.185 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:13.185 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:13.185 + cd /home/vagrant/spdk_repo
00:02:13.185 + source /etc/os-release
00:02:13.185 ++ NAME='Fedora Linux'
00:02:13.185 ++ VERSION='39 (Cloud Edition)'
00:02:13.185 ++ ID=fedora
00:02:13.185 ++ VERSION_ID=39
00:02:13.185 ++ VERSION_CODENAME=
00:02:13.185 ++ PLATFORM_ID=platform:f39
00:02:13.185 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:13.185 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:13.185 ++ LOGO=fedora-logo-icon
00:02:13.185 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:13.185 ++ HOME_URL=https://fedoraproject.org/
00:02:13.185 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:13.185 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:13.185 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:13.185 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:13.185 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:13.185 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:13.185 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:13.185 ++ SUPPORT_END=2024-11-12
00:02:13.185 ++ VARIANT='Cloud Edition'
00:02:13.185 ++ VARIANT_ID=cloud
00:02:13.185 + uname -a
00:02:13.185 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:13.185 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:13.752 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:13.752 Hugepages
00:02:13.752 node hugesize free / total
00:02:13.752 node0 1048576kB 0 / 0
00:02:13.752 node0 2048kB 0 / 0
00:02:13.752
00:02:13.752 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:13.752 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:13.752 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:14.011 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:14.011 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:02:14.011 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:14.011 + rm -f /tmp/spdk-ld-path
00:02:14.011 + source autorun-spdk.conf
00:02:14.011 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.011 ++ SPDK_TEST_NVME=1
00:02:14.011 ++ SPDK_TEST_FTL=1
00:02:14.011 ++ SPDK_TEST_ISAL=1
00:02:14.011 ++ SPDK_RUN_ASAN=1
00:02:14.011 ++ SPDK_RUN_UBSAN=1
00:02:14.011 ++ SPDK_TEST_XNVME=1
00:02:14.011 ++ SPDK_TEST_NVME_FDP=1
00:02:14.011 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.011 ++ RUN_NIGHTLY=1
00:02:14.011 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:14.011 + [[ -n '' ]]
00:02:14.011 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:14.011 + for M in /var/spdk/build-*-manifest.txt
00:02:14.011 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:14.011 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.011 + for M in /var/spdk/build-*-manifest.txt
00:02:14.011 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:14.011 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.011 + for M in /var/spdk/build-*-manifest.txt
00:02:14.011 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:14.011 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.011 ++ uname
00:02:14.011 + [[ Linux == \L\i\n\u\x ]]
00:02:14.011 + sudo dmesg -T
00:02:14.011 + sudo dmesg --clear
00:02:14.011 + dmesg_pid=5024
+ sudo dmesg -Tw
00:02:14.011 + [[ Fedora Linux == FreeBSD ]]
00:02:14.011 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:14.011 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:14.011 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:14.011 + [[ -x /usr/src/fio-static/fio ]]
00:02:14.011 + export FIO_BIN=/usr/src/fio-static/fio
00:02:14.011 + FIO_BIN=/usr/src/fio-static/fio
00:02:14.011 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:14.011 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:14.011 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:14.011 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:14.011 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:14.011 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:14.011 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:14.011 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:14.011 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:14.011 12:28:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:14.011 12:28:13 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.011 12:28:13 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:02:14.011 12:28:13 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:14.011 12:28:13 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:14.011 12:28:13 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:14.011 12:28:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:14.011 12:28:13 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:14.011 12:28:13 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:14.011 12:28:13 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:14.011 12:28:13 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:14.011 12:28:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.011 12:28:13 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.011 12:28:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.011 12:28:13 -- paths/export.sh@5 -- $ export PATH
00:02:14.011 12:28:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:14.011 12:28:13 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:14.011 12:28:13 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:14.011 12:28:13 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734179293.XXXXXX
00:02:14.011 12:28:13 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734179293.Abn7Bs
00:02:14.011 12:28:13 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:14.011 12:28:13 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:14.011 12:28:13 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:14.011 12:28:13 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:14.011 12:28:13 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:14.011 12:28:13 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:14.011 12:28:13 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:14.011 12:28:13 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.011 12:28:13 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:14.011 12:28:13 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:14.270 12:28:13 -- pm/common@17 -- $ local monitor
00:02:14.270 12:28:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:14.270 12:28:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:14.270 12:28:13 -- pm/common@25 -- $ sleep 1
00:02:14.270 12:28:13 -- pm/common@21 -- $ date +%s
00:02:14.270 12:28:13 -- pm/common@21 -- $ date +%s
00:02:14.270 12:28:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734179293
00:02:14.270 12:28:13 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734179293
00:02:14.270 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734179293_collect-cpu-load.pm.log
00:02:14.270 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734179293_collect-vmstat.pm.log
00:02:15.205 12:28:14 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:15.205 12:28:14 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:15.205 12:28:14 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:15.205 12:28:14 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:15.205 12:28:14 -- spdk/autobuild.sh@16 -- $ date -u
00:02:15.205 Sat Dec 14 12:28:14 PM UTC 2024
00:02:15.205 12:28:14 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:15.205 v25.01-rc1-2-ge01cb43b8
00:02:15.205 12:28:14 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:15.205 12:28:14 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:15.205 12:28:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:15.205 12:28:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:15.205 12:28:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.205 ************************************
00:02:15.205 START TEST asan
00:02:15.205 ************************************
00:02:15.205 using asan
00:02:15.205 12:28:14 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:15.205
00:02:15.205 real 0m0.000s
00:02:15.205 user 0m0.000s
00:02:15.205 sys 0m0.000s
00:02:15.205 12:28:14 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:15.205 12:28:14 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:15.205 ************************************
00:02:15.205 END TEST asan
00:02:15.205 ************************************
00:02:15.205 12:28:14 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:15.205 12:28:14 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:15.205 12:28:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:15.205 12:28:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:15.205 12:28:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:15.205 ************************************
00:02:15.205 START TEST ubsan
00:02:15.205 ************************************
00:02:15.205 using ubsan
00:02:15.205 12:28:14 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:15.205
00:02:15.205 real 0m0.000s
00:02:15.205 user 0m0.000s
00:02:15.205 sys 0m0.000s
00:02:15.205 12:28:14 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:15.205 12:28:14 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:15.205 ************************************
00:02:15.205 END TEST ubsan
00:02:15.205 ************************************
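The START TEST/END TEST banners and the real/user/sys timings above come from the run_test helper in SPDK's test/common/autotest_common.sh. A simplified sketch of the idiom (the real helper also manages xtrace state and per-test timing records):

    # Sketch only: wrap a command in banners and timing, as the log output suggests.
    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        time "$@"    # e.g. run_test ubsan echo 'using ubsan'
        echo "END TEST $test_name"
        echo "************************************"
    }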
00:02:15.205 12:28:14 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:15.205 12:28:14 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:15.205 12:28:14 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:15.205 12:28:14 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:15.205 12:28:14 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:15.205 12:28:14 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:15.205 12:28:14 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:15.205 12:28:14 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:15.205 12:28:14 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:15.205 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:15.205 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:15.770 Using 'verbs' RDMA provider
00:02:26.328 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:36.334 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:36.334 Creating mk/config.mk...done.
00:02:36.334 Creating mk/cc.flags.mk...done.
00:02:36.334 Type 'make' to build.
00:02:36.334 12:28:35 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:36.334 12:28:35 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:36.334 12:28:35 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:36.334 12:28:35 -- common/autotest_common.sh@10 -- $ set +x
00:02:36.334 ************************************
00:02:36.334 START TEST make
00:02:36.334 ************************************
00:02:36.334 12:28:35 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:36.334 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:36.334 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:36.334 meson setup builddir \
00:02:36.334 -Dwith-libaio=enabled \
00:02:36.334 -Dwith-liburing=enabled \
00:02:36.334 -Dwith-libvfn=disabled \
00:02:36.334 -Dwith-spdk=disabled \
00:02:36.334 -Dexamples=false \
00:02:36.334 -Dtests=false \
00:02:36.334 -Dtools=false && \
00:02:36.334 meson compile -C builddir && \
00:02:36.334 cd -)
00:02:38.234 The Meson build system
00:02:38.234 Version: 1.5.0
00:02:38.234 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:38.234 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:38.234 Build type: native build
00:02:38.234 Project name: xnvme
00:02:38.234 Project version: 0.7.5
00:02:38.234 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:38.234 C linker for the host machine: cc ld.bfd 2.40-14
00:02:38.234 Host machine cpu family: x86_64
00:02:38.234 Host machine cpu: x86_64
00:02:38.234 Message: host_machine.system: linux
00:02:38.234 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:38.234 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:38.234 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:38.234 Run-time dependency threads found: YES
00:02:38.234 Has header "setupapi.h" : NO
00:02:38.234 Has header "linux/blkzoned.h" : YES
00:02:38.234 Has header "linux/blkzoned.h" : YES (cached)
00:02:38.234 Has header "libaio.h" : YES
00:02:38.234 Library aio found: YES
00:02:38.234 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:38.234 Run-time dependency liburing found: YES 2.2
00:02:38.234 Dependency libvfn skipped: feature with-libvfn disabled
00:02:38.234 Found CMake: /usr/bin/cmake (3.27.7)
00:02:38.234 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:38.234 Subproject spdk : skipped: feature with-spdk disabled
00:02:38.234 Run-time dependency appleframeworks found: NO (tried framework)
00:02:38.234 Run-time dependency appleframeworks found: NO (tried framework)
00:02:38.234 Library rt found: YES
00:02:38.234 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:38.234 Configuring xnvme_config.h using configuration
00:02:38.234 Configuring xnvme.spec using configuration
00:02:38.234 Run-time dependency bash-completion found: YES 2.11
00:02:38.234 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:38.234 Program cp found: YES (/usr/bin/cp)
00:02:38.234 Build targets in project: 3
00:02:38.234
00:02:38.234 xnvme 0.7.5
00:02:38.234
00:02:38.234 Subprojects
00:02:38.234 spdk : NO Feature 'with-spdk' disabled
00:02:38.234
00:02:38.234 User defined options
00:02:38.234 examples : false
00:02:38.234 tests : false
00:02:38.234 tools : false
00:02:38.234 with-libaio : enabled
00:02:38.234 with-liburing: enabled
00:02:38.234 with-libvfn : disabled
00:02:38.234 with-spdk : disabled
00:02:38.234
00:02:38.234 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:38.802 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:38.802 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:38.802 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:38.802 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:38.802 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:38.802 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:38.802 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:38.802 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:38.802 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:38.802 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:38.802 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:38.802 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:38.802 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:39.062 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:39.062 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:39.062 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:39.062 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:39.062 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:39.062 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:39.062 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:39.062 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:39.062 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:39.062 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:39.062 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:39.062 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:39.062 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:39.062 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:39.062 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:39.062 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:39.062 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:39.062 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:39.062 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:39.062 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:39.062 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:39.062 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:39.062 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:39.062 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:39.062 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:39.062 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:39.062 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:39.062 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:39.062 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:39.062 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:39.323 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:39.323 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:39.323 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:39.323 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:39.323 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:39.323 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:39.323 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:39.323 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:39.323 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:39.323 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:39.323 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:39.323 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:39.323 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:39.323 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:39.323 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:39.323 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:39.323 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:39.323 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:39.323 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:39.323 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:39.323 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:39.323 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:39.323 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:39.323 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:39.323 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:39.323 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:39.323 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:39.583 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:39.583 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:39.583 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:39.583 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:39.841 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:39.841 [75/76] Linking static target lib/libxnvme.a
00:02:39.841 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:39.841 INFO: autodetecting backend as ninja
00:02:39.841 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:39.841 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:46.400 The Meson build system
00:02:46.400 Version: 1.5.0
00:02:46.400 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:46.400 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:46.400 Build type: native build
00:02:46.400 Program cat found: YES (/usr/bin/cat)
00:02:46.400 Project name: DPDK
00:02:46.400 Project version: 24.03.0
00:02:46.400 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:46.400 C linker for the host machine: cc ld.bfd 2.40-14
00:02:46.400 Host machine cpu family: x86_64
00:02:46.400 Host machine cpu: x86_64
00:02:46.400 Message: ## Building in Developer Mode ##
00:02:46.400 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:46.400 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:46.400 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:46.400 Program python3 found: YES (/usr/bin/python3)
00:02:46.400 Program cat found: YES (/usr/bin/cat)
00:02:46.400 Compiler for C supports arguments -march=native: YES
00:02:46.400 Checking for size of "void *" : 8
00:02:46.400 Checking for size of "void *" : 8 (cached)
00:02:46.400 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:46.400 Library m found: YES
00:02:46.400 Library numa found: YES
00:02:46.400 Has header "numaif.h" : YES
00:02:46.400 Library fdt found: NO
00:02:46.400 Library execinfo found: NO
00:02:46.400 Has header "execinfo.h" : YES
00:02:46.400 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:46.400 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:46.400 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:46.400 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:46.400 Run-time dependency openssl found: YES 3.1.1
00:02:46.400 Run-time dependency libpcap found: YES 1.10.4
00:02:46.400 Has header "pcap.h" with dependency libpcap: YES
00:02:46.400 Compiler for C supports arguments -Wcast-qual: YES
00:02:46.400 Compiler for C supports arguments -Wdeprecated: YES
00:02:46.400 Compiler for C supports arguments -Wformat: YES
00:02:46.400 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:46.400 Compiler for C supports arguments -Wformat-security: NO
00:02:46.400 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:46.400 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:46.400 Compiler for C supports arguments -Wnested-externs: YES
00:02:46.400 Compiler for C supports arguments -Wold-style-definition: YES
00:02:46.400 Compiler for C supports arguments -Wpointer-arith: YES
00:02:46.400 Compiler for C supports arguments -Wsign-compare: YES
00:02:46.400 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:46.400 Compiler for C supports arguments -Wundef: YES
00:02:46.400 Compiler for C supports arguments -Wwrite-strings: YES
00:02:46.400 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:46.400 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:46.400 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:46.400 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:46.400 Program objdump found: YES (/usr/bin/objdump)
00:02:46.400 Compiler for C supports arguments -mavx512f: YES
00:02:46.400 Checking if "AVX512 checking" compiles: YES
00:02:46.400 Fetching value of define "__SSE4_2__" : 1
00:02:46.400 Fetching value of define "__AES__" : 1
00:02:46.400 Fetching value of define "__AVX__" : 1
00:02:46.400 Fetching value of define "__AVX2__" : 1
00:02:46.400 Fetching value of define "__AVX512BW__" : 1
00:02:46.400 Fetching value of define "__AVX512CD__" : 1
00:02:46.400 Fetching value of define "__AVX512DQ__" : 1
00:02:46.400 Fetching value of define "__AVX512F__" : 1
00:02:46.400 Fetching value of define "__AVX512VL__" : 1
00:02:46.400 Fetching value of define "__PCLMUL__" : 1
00:02:46.400 Fetching value of define "__RDRND__" : 1
00:02:46.400 Fetching value of define "__RDSEED__" : 1
00:02:46.400 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:46.400 Fetching value of define "__znver1__" : (undefined)
00:02:46.400 Fetching value of define "__znver2__" : (undefined)
00:02:46.400 Fetching value of define "__znver3__" : (undefined)
00:02:46.400 Fetching value of define "__znver4__" : (undefined)
00:02:46.400 Library asan found: YES
00:02:46.400 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:46.400 Message: lib/log: Defining dependency "log"
00:02:46.400 Message: lib/kvargs: Defining dependency "kvargs"
00:02:46.400 Message: lib/telemetry: Defining dependency "telemetry"
00:02:46.400 Library rt found: YES
00:02:46.400 Checking for function "getentropy" : NO
00:02:46.400 Message: lib/eal: Defining dependency "eal"
00:02:46.400 Message: lib/ring: Defining dependency "ring"
00:02:46.400 Message: lib/rcu: Defining dependency "rcu"
00:02:46.400 Message: lib/mempool: Defining dependency "mempool"
00:02:46.400 Message: lib/mbuf: Defining dependency "mbuf"
00:02:46.400 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:46.400 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:46.400 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:46.400 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:46.400 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:46.400 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:46.400 Compiler for C supports arguments -mpclmul: YES
00:02:46.400 Compiler for C supports arguments -maes: YES
00:02:46.400 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:46.400 Compiler for C supports arguments -mavx512bw: YES
00:02:46.400 Compiler for C supports arguments -mavx512dq: YES
00:02:46.400 Compiler for C supports arguments -mavx512vl: YES
00:02:46.400 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:46.400 Compiler for C supports arguments -mavx2: YES
00:02:46.400 Compiler for C supports arguments -mavx: YES
00:02:46.400 Message: lib/net: Defining dependency "net"
00:02:46.400 Message: lib/meter: Defining dependency "meter"
00:02:46.400 Message: lib/ethdev: Defining dependency "ethdev"
00:02:46.400 Message: lib/pci: Defining dependency "pci"
00:02:46.400 Message: lib/cmdline: Defining dependency "cmdline"
00:02:46.400 Message: lib/hash: Defining dependency "hash"
dependency "timer" 00:02:46.400 Message: lib/compressdev: Defining dependency "compressdev" 00:02:46.400 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:46.400 Message: lib/dmadev: Defining dependency "dmadev" 00:02:46.400 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:46.400 Message: lib/power: Defining dependency "power" 00:02:46.400 Message: lib/reorder: Defining dependency "reorder" 00:02:46.400 Message: lib/security: Defining dependency "security" 00:02:46.400 Has header "linux/userfaultfd.h" : YES 00:02:46.400 Has header "linux/vduse.h" : YES 00:02:46.400 Message: lib/vhost: Defining dependency "vhost" 00:02:46.400 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:46.400 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:46.400 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:46.400 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:46.400 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:46.400 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:46.400 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:46.400 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:46.400 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:46.400 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:46.400 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:46.400 Configuring doxy-api-html.conf using configuration 00:02:46.400 Configuring doxy-api-man.conf using configuration 00:02:46.400 Program mandb found: YES (/usr/bin/mandb) 00:02:46.400 Program sphinx-build found: NO 00:02:46.400 Configuring rte_build_config.h using configuration 00:02:46.400 Message: 00:02:46.400 ================= 00:02:46.400 Applications Enabled 00:02:46.400 ================= 00:02:46.400 00:02:46.400 apps: 00:02:46.400 00:02:46.400 00:02:46.400 Message: 00:02:46.400 ================= 00:02:46.400 Libraries Enabled 00:02:46.400 ================= 00:02:46.400 00:02:46.400 libs: 00:02:46.400 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:46.400 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:46.400 cryptodev, dmadev, power, reorder, security, vhost, 00:02:46.400 00:02:46.400 Message: 00:02:46.400 =============== 00:02:46.400 Drivers Enabled 00:02:46.400 =============== 00:02:46.400 00:02:46.400 common: 00:02:46.400 00:02:46.400 bus: 00:02:46.401 pci, vdev, 00:02:46.401 mempool: 00:02:46.401 ring, 00:02:46.401 dma: 00:02:46.401 00:02:46.401 net: 00:02:46.401 00:02:46.401 crypto: 00:02:46.401 00:02:46.401 compress: 00:02:46.401 00:02:46.401 vdpa: 00:02:46.401 00:02:46.401 00:02:46.401 Message: 00:02:46.401 ================= 00:02:46.401 Content Skipped 00:02:46.401 ================= 00:02:46.401 00:02:46.401 apps: 00:02:46.401 dumpcap: explicitly disabled via build config 00:02:46.401 graph: explicitly disabled via build config 00:02:46.401 pdump: explicitly disabled via build config 00:02:46.401 proc-info: explicitly disabled via build config 00:02:46.401 test-acl: explicitly disabled via build config 00:02:46.401 test-bbdev: explicitly disabled via build config 00:02:46.401 test-cmdline: explicitly disabled via build config 00:02:46.401 test-compress-perf: explicitly disabled via build config 00:02:46.401 test-crypto-perf: explicitly disabled via build config 00:02:46.401 test-dma-perf: explicitly disabled via build config 00:02:46.401 
test-eventdev: explicitly disabled via build config 00:02:46.401 test-fib: explicitly disabled via build config 00:02:46.401 test-flow-perf: explicitly disabled via build config 00:02:46.401 test-gpudev: explicitly disabled via build config 00:02:46.401 test-mldev: explicitly disabled via build config 00:02:46.401 test-pipeline: explicitly disabled via build config 00:02:46.401 test-pmd: explicitly disabled via build config 00:02:46.401 test-regex: explicitly disabled via build config 00:02:46.401 test-sad: explicitly disabled via build config 00:02:46.401 test-security-perf: explicitly disabled via build config 00:02:46.401 00:02:46.401 libs: 00:02:46.401 argparse: explicitly disabled via build config 00:02:46.401 metrics: explicitly disabled via build config 00:02:46.401 acl: explicitly disabled via build config 00:02:46.401 bbdev: explicitly disabled via build config 00:02:46.401 bitratestats: explicitly disabled via build config 00:02:46.401 bpf: explicitly disabled via build config 00:02:46.401 cfgfile: explicitly disabled via build config 00:02:46.401 distributor: explicitly disabled via build config 00:02:46.401 efd: explicitly disabled via build config 00:02:46.401 eventdev: explicitly disabled via build config 00:02:46.401 dispatcher: explicitly disabled via build config 00:02:46.401 gpudev: explicitly disabled via build config 00:02:46.401 gro: explicitly disabled via build config 00:02:46.401 gso: explicitly disabled via build config 00:02:46.401 ip_frag: explicitly disabled via build config 00:02:46.401 jobstats: explicitly disabled via build config 00:02:46.401 latencystats: explicitly disabled via build config 00:02:46.401 lpm: explicitly disabled via build config 00:02:46.401 member: explicitly disabled via build config 00:02:46.401 pcapng: explicitly disabled via build config 00:02:46.401 rawdev: explicitly disabled via build config 00:02:46.401 regexdev: explicitly disabled via build config 00:02:46.401 mldev: explicitly disabled via build config 00:02:46.401 rib: explicitly disabled via build config 00:02:46.401 sched: explicitly disabled via build config 00:02:46.401 stack: explicitly disabled via build config 00:02:46.401 ipsec: explicitly disabled via build config 00:02:46.401 pdcp: explicitly disabled via build config 00:02:46.401 fib: explicitly disabled via build config 00:02:46.401 port: explicitly disabled via build config 00:02:46.401 pdump: explicitly disabled via build config 00:02:46.401 table: explicitly disabled via build config 00:02:46.401 pipeline: explicitly disabled via build config 00:02:46.401 graph: explicitly disabled via build config 00:02:46.401 node: explicitly disabled via build config 00:02:46.401 00:02:46.401 drivers: 00:02:46.401 common/cpt: not in enabled drivers build config 00:02:46.401 common/dpaax: not in enabled drivers build config 00:02:46.401 common/iavf: not in enabled drivers build config 00:02:46.401 common/idpf: not in enabled drivers build config 00:02:46.401 common/ionic: not in enabled drivers build config 00:02:46.401 common/mvep: not in enabled drivers build config 00:02:46.401 common/octeontx: not in enabled drivers build config 00:02:46.401 bus/auxiliary: not in enabled drivers build config 00:02:46.401 bus/cdx: not in enabled drivers build config 00:02:46.401 bus/dpaa: not in enabled drivers build config 00:02:46.401 bus/fslmc: not in enabled drivers build config 00:02:46.401 bus/ifpga: not in enabled drivers build config 00:02:46.401 bus/platform: not in enabled drivers build config 00:02:46.401 bus/uacce: not in enabled 
drivers build config 00:02:46.401 bus/vmbus: not in enabled drivers build config 00:02:46.401 common/cnxk: not in enabled drivers build config 00:02:46.401 common/mlx5: not in enabled drivers build config 00:02:46.401 common/nfp: not in enabled drivers build config 00:02:46.401 common/nitrox: not in enabled drivers build config 00:02:46.401 common/qat: not in enabled drivers build config 00:02:46.401 common/sfc_efx: not in enabled drivers build config 00:02:46.401 mempool/bucket: not in enabled drivers build config 00:02:46.401 mempool/cnxk: not in enabled drivers build config 00:02:46.401 mempool/dpaa: not in enabled drivers build config 00:02:46.401 mempool/dpaa2: not in enabled drivers build config 00:02:46.401 mempool/octeontx: not in enabled drivers build config 00:02:46.401 mempool/stack: not in enabled drivers build config 00:02:46.401 dma/cnxk: not in enabled drivers build config 00:02:46.401 dma/dpaa: not in enabled drivers build config 00:02:46.401 dma/dpaa2: not in enabled drivers build config 00:02:46.401 dma/hisilicon: not in enabled drivers build config 00:02:46.401 dma/idxd: not in enabled drivers build config 00:02:46.401 dma/ioat: not in enabled drivers build config 00:02:46.401 dma/skeleton: not in enabled drivers build config 00:02:46.401 net/af_packet: not in enabled drivers build config 00:02:46.401 net/af_xdp: not in enabled drivers build config 00:02:46.401 net/ark: not in enabled drivers build config 00:02:46.401 net/atlantic: not in enabled drivers build config 00:02:46.401 net/avp: not in enabled drivers build config 00:02:46.401 net/axgbe: not in enabled drivers build config 00:02:46.401 net/bnx2x: not in enabled drivers build config 00:02:46.401 net/bnxt: not in enabled drivers build config 00:02:46.401 net/bonding: not in enabled drivers build config 00:02:46.401 net/cnxk: not in enabled drivers build config 00:02:46.401 net/cpfl: not in enabled drivers build config 00:02:46.401 net/cxgbe: not in enabled drivers build config 00:02:46.401 net/dpaa: not in enabled drivers build config 00:02:46.401 net/dpaa2: not in enabled drivers build config 00:02:46.401 net/e1000: not in enabled drivers build config 00:02:46.401 net/ena: not in enabled drivers build config 00:02:46.401 net/enetc: not in enabled drivers build config 00:02:46.401 net/enetfec: not in enabled drivers build config 00:02:46.401 net/enic: not in enabled drivers build config 00:02:46.401 net/failsafe: not in enabled drivers build config 00:02:46.401 net/fm10k: not in enabled drivers build config 00:02:46.401 net/gve: not in enabled drivers build config 00:02:46.401 net/hinic: not in enabled drivers build config 00:02:46.401 net/hns3: not in enabled drivers build config 00:02:46.401 net/i40e: not in enabled drivers build config 00:02:46.401 net/iavf: not in enabled drivers build config 00:02:46.401 net/ice: not in enabled drivers build config 00:02:46.401 net/idpf: not in enabled drivers build config 00:02:46.401 net/igc: not in enabled drivers build config 00:02:46.401 net/ionic: not in enabled drivers build config 00:02:46.401 net/ipn3ke: not in enabled drivers build config 00:02:46.401 net/ixgbe: not in enabled drivers build config 00:02:46.401 net/mana: not in enabled drivers build config 00:02:46.401 net/memif: not in enabled drivers build config 00:02:46.401 net/mlx4: not in enabled drivers build config 00:02:46.401 net/mlx5: not in enabled drivers build config 00:02:46.401 net/mvneta: not in enabled drivers build config 00:02:46.401 net/mvpp2: not in enabled drivers build config 00:02:46.401 
net/netvsc: not in enabled drivers build config 00:02:46.401 net/nfb: not in enabled drivers build config 00:02:46.401 net/nfp: not in enabled drivers build config 00:02:46.401 net/ngbe: not in enabled drivers build config 00:02:46.401 net/null: not in enabled drivers build config 00:02:46.401 net/octeontx: not in enabled drivers build config 00:02:46.401 net/octeon_ep: not in enabled drivers build config 00:02:46.401 net/pcap: not in enabled drivers build config 00:02:46.401 net/pfe: not in enabled drivers build config 00:02:46.401 net/qede: not in enabled drivers build config 00:02:46.401 net/ring: not in enabled drivers build config 00:02:46.401 net/sfc: not in enabled drivers build config 00:02:46.401 net/softnic: not in enabled drivers build config 00:02:46.401 net/tap: not in enabled drivers build config 00:02:46.401 net/thunderx: not in enabled drivers build config 00:02:46.401 net/txgbe: not in enabled drivers build config 00:02:46.401 net/vdev_netvsc: not in enabled drivers build config 00:02:46.401 net/vhost: not in enabled drivers build config 00:02:46.401 net/virtio: not in enabled drivers build config 00:02:46.401 net/vmxnet3: not in enabled drivers build config 00:02:46.401 raw/*: missing internal dependency, "rawdev" 00:02:46.401 crypto/armv8: not in enabled drivers build config 00:02:46.401 crypto/bcmfs: not in enabled drivers build config 00:02:46.401 crypto/caam_jr: not in enabled drivers build config 00:02:46.401 crypto/ccp: not in enabled drivers build config 00:02:46.401 crypto/cnxk: not in enabled drivers build config 00:02:46.401 crypto/dpaa_sec: not in enabled drivers build config 00:02:46.401 crypto/dpaa2_sec: not in enabled drivers build config 00:02:46.401 crypto/ipsec_mb: not in enabled drivers build config 00:02:46.401 crypto/mlx5: not in enabled drivers build config 00:02:46.401 crypto/mvsam: not in enabled drivers build config 00:02:46.401 crypto/nitrox: not in enabled drivers build config 00:02:46.401 crypto/null: not in enabled drivers build config 00:02:46.401 crypto/octeontx: not in enabled drivers build config 00:02:46.401 crypto/openssl: not in enabled drivers build config 00:02:46.401 crypto/scheduler: not in enabled drivers build config 00:02:46.401 crypto/uadk: not in enabled drivers build config 00:02:46.401 crypto/virtio: not in enabled drivers build config 00:02:46.401 compress/isal: not in enabled drivers build config 00:02:46.401 compress/mlx5: not in enabled drivers build config 00:02:46.401 compress/nitrox: not in enabled drivers build config 00:02:46.401 compress/octeontx: not in enabled drivers build config 00:02:46.401 compress/zlib: not in enabled drivers build config 00:02:46.401 regex/*: missing internal dependency, "regexdev" 00:02:46.401 ml/*: missing internal dependency, "mldev" 00:02:46.401 vdpa/ifc: not in enabled drivers build config 00:02:46.401 vdpa/mlx5: not in enabled drivers build config 00:02:46.401 vdpa/nfp: not in enabled drivers build config 00:02:46.401 vdpa/sfc: not in enabled drivers build config 00:02:46.401 event/*: missing internal dependency, "eventdev" 00:02:46.401 baseband/*: missing internal dependency, "bbdev" 00:02:46.402 gpu/*: missing internal dependency, "gpudev" 00:02:46.402 00:02:46.402 00:02:46.402 Build targets in project: 84 00:02:46.402 00:02:46.402 DPDK 24.03.0 00:02:46.402 00:02:46.402 User defined options 00:02:46.402 buildtype : debug 00:02:46.402 default_library : shared 00:02:46.402 libdir : lib 00:02:46.402 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:46.402 b_sanitize : address 
00:02:46.402 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:46.402 c_link_args : 00:02:46.402 cpu_instruction_set: native 00:02:46.402 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:46.402 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:46.402 enable_docs : false 00:02:46.402 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:46.402 enable_kmods : false 00:02:46.402 max_lcores : 128 00:02:46.402 tests : false 00:02:46.402 00:02:46.402 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:46.402 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.402 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:46.402 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.402 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.402 [4/267] Linking static target lib/librte_kvargs.a 00:02:46.402 [5/267] Linking static target lib/librte_log.a 00:02:46.660 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.660 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.660 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.918 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.918 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.918 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.918 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.918 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.918 [14/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.918 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:47.176 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:47.176 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:47.176 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:47.176 [19/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:47.176 [20/267] Linking static target lib/librte_telemetry.a 00:02:47.176 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:47.434 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:47.434 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.434 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:47.434 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:47.434 [26/267] Linking target lib/librte_log.so.24.1 00:02:47.434 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 
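The "User defined options" summary above fully records how this DPDK tree was configured, so the step can be replayed outside CI. A minimal sketch of the equivalent plain meson call, using only values printed in this log; the disable_apps/disable_libs/enable_drivers lists are abbreviated with an ellipsis because the full lists are spelled out in the summary above, and in this job the command is actually generated by SPDK's configure, so the exact form may differ:

    # Reconstructed from the option summary above; paths as printed in this log.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dmax_lcores=128 \
        -Dtests=false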
00:02:47.434 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:47.434 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.434 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:47.692 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.692 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:47.692 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.692 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.692 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.692 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.692 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.949 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:47.950 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.950 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.950 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.950 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.950 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.950 [44/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.950 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.950 [46/267] Linking target lib/librte_telemetry.so.24.1 00:02:48.207 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:48.207 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:48.207 [49/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:48.207 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.207 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:48.207 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:48.207 [53/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:48.207 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:48.464 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.464 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.464 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.464 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.464 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.464 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.464 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.721 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.721 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.721 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.721 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.721 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.722 [67/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.979 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.979 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.979 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.979 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.979 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.979 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.979 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.979 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:49.237 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:49.237 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.237 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:49.237 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.237 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.496 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.496 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.496 [83/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.496 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.496 [85/267] Linking static target lib/librte_ring.a 00:02:49.496 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.496 [87/267] Linking static target lib/librte_eal.a 00:02:49.782 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.782 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.782 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.782 [91/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.782 [92/267] Linking static target lib/librte_rcu.a 00:02:49.782 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.782 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.782 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.782 [96/267] Linking static target lib/librte_mempool.a 00:02:50.041 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.041 [98/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.041 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.299 [100/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.299 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.299 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:50.299 [103/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.299 [104/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.299 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:50.299 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.299 [107/267] Linking static target lib/librte_meter.a 00:02:50.299 [108/267] Linking static target lib/librte_net.a 00:02:50.557 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 
00:02:50.557 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.557 [111/267] Linking static target lib/librte_mbuf.a 00:02:50.815 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.815 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.815 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.815 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.815 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.815 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.815 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.074 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.074 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.332 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.332 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.333 [123/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.333 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.333 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.333 [126/267] Linking static target lib/librte_pci.a 00:02:51.333 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.591 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.591 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.591 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.591 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.591 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.591 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.591 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.591 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:51.591 [136/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.591 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.850 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.850 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.850 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.850 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.850 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.850 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:51.850 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.850 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.850 [146/267] Linking static target lib/librte_cmdline.a 00:02:51.850 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.108 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.108 [149/267] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:52.108 [150/267] Linking static target lib/librte_timer.a 00:02:52.108 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.108 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:52.108 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.367 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:52.367 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.367 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.625 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.625 [158/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.625 [159/267] Linking static target lib/librte_compressdev.a 00:02:52.625 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.625 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.625 [162/267] Linking static target lib/librte_hash.a 00:02:52.625 [163/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.625 [164/267] Linking static target lib/librte_ethdev.a 00:02:52.625 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.625 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.625 [167/267] Linking static target lib/librte_dmadev.a 00:02:52.883 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.883 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.883 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.883 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:53.141 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:53.141 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.141 [174/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:53.141 [175/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:53.141 [176/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.399 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.399 [178/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.399 [179/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.399 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.399 [181/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.399 [182/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.399 [183/267] Linking static target lib/librte_cryptodev.a 00:02:53.657 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.657 [185/267] Linking static target lib/librte_power.a 00:02:53.657 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.657 [187/267] Linking static target lib/librte_reorder.a 00:02:53.657 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.657 [189/267] Linking 
static target lib/librte_security.a 00:02:53.915 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.915 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.915 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:54.173 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.173 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.430 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.430 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.430 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.430 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.688 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.688 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.688 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.688 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.688 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.688 [204/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.945 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.945 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.945 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.945 [208/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.945 [209/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.203 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.203 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.203 [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.203 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.203 [214/267] Linking static target drivers/librte_bus_vdev.a 00:02:55.203 [215/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.203 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.204 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:55.204 [218/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.204 [219/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.461 [220/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.461 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.461 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.461 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.461 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:55.461 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.719 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:55.976 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.909 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.909 [229/267] Linking target lib/librte_eal.so.24.1 00:02:56.909 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:57.166 [231/267] Linking target lib/librte_pci.so.24.1 00:02:57.166 [232/267] Linking target lib/librte_meter.so.24.1 00:02:57.166 [233/267] Linking target lib/librte_timer.so.24.1 00:02:57.166 [234/267] Linking target lib/librte_ring.so.24.1 00:02:57.166 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:57.166 [236/267] Linking target lib/librte_dmadev.so.24.1 00:02:57.166 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:57.166 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:57.166 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:57.166 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:57.166 [241/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:57.166 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:57.166 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:57.166 [244/267] Linking target lib/librte_mempool.so.24.1 00:02:57.166 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:57.166 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:57.424 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:57.424 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:57.424 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:57.424 [250/267] Linking target lib/librte_reorder.so.24.1 00:02:57.424 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:57.424 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:57.424 [253/267] Linking target lib/librte_net.so.24.1 00:02:57.682 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:57.682 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:57.682 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:57.682 [257/267] Linking target lib/librte_hash.so.24.1 00:02:57.682 [258/267] Linking target lib/librte_security.so.24.1 00:02:57.682 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:57.940 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.940 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:58.197 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:58.197 [263/267] Linking target lib/librte_power.so.24.1 00:02:58.762 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.762 [265/267] Linking static target lib/librte_vhost.a 00:03:00.182 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.182 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:00.182 INFO: autodetecting backend as ninja 00:03:00.182 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:15.058 CC lib/log/log_flags.o 
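Here the embedded DPDK build completes (267/267 targets, driven by the ninja command in the INFO line above) and the log switches to SPDK's own make output: CC entries are C compilations, LIB entries produce static archives, SO entries produce versioned shared objects, and SYMLINK entries create the matching unversioned .so links. A rough sketch of the equivalent manual steps, assuming the repository layout shown in the paths above; the --enable-debug and --enable-asan flags are inferred from the buildtype=debug and b_sanitize=address settings in the DPDK options block, and the job's full configure line is not visible in this part of the log:

    # Assumed local equivalent of this build stage.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan
    make -j10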
00:03:15.058 CC lib/log/log.o 00:03:15.058 CC lib/log/log_deprecated.o 00:03:15.058 CC lib/ut/ut.o 00:03:15.058 CC lib/ut_mock/mock.o 00:03:15.058 LIB libspdk_log.a 00:03:15.058 LIB libspdk_ut_mock.a 00:03:15.058 LIB libspdk_ut.a 00:03:15.058 SO libspdk_log.so.7.1 00:03:15.058 SO libspdk_ut_mock.so.6.0 00:03:15.058 SO libspdk_ut.so.2.0 00:03:15.058 SYMLINK libspdk_ut.so 00:03:15.058 SYMLINK libspdk_log.so 00:03:15.058 SYMLINK libspdk_ut_mock.so 00:03:15.058 CC lib/dma/dma.o 00:03:15.058 CC lib/ioat/ioat.o 00:03:15.058 CXX lib/trace_parser/trace.o 00:03:15.058 CC lib/util/base64.o 00:03:15.058 CC lib/util/bit_array.o 00:03:15.058 CC lib/util/crc16.o 00:03:15.058 CC lib/util/cpuset.o 00:03:15.058 CC lib/util/crc32.o 00:03:15.058 CC lib/util/crc32c.o 00:03:15.058 CC lib/util/crc32_ieee.o 00:03:15.058 CC lib/vfio_user/host/vfio_user_pci.o 00:03:15.058 CC lib/util/crc64.o 00:03:15.058 CC lib/util/dif.o 00:03:15.058 CC lib/util/fd.o 00:03:15.058 LIB libspdk_dma.a 00:03:15.058 CC lib/util/fd_group.o 00:03:15.059 SO libspdk_dma.so.5.0 00:03:15.059 CC lib/util/file.o 00:03:15.059 SYMLINK libspdk_dma.so 00:03:15.059 CC lib/util/hexlify.o 00:03:15.059 CC lib/util/iov.o 00:03:15.059 CC lib/util/math.o 00:03:15.059 LIB libspdk_ioat.a 00:03:15.059 CC lib/util/net.o 00:03:15.059 SO libspdk_ioat.so.7.0 00:03:15.059 CC lib/vfio_user/host/vfio_user.o 00:03:15.059 SYMLINK libspdk_ioat.so 00:03:15.059 CC lib/util/pipe.o 00:03:15.059 CC lib/util/strerror_tls.o 00:03:15.059 CC lib/util/string.o 00:03:15.059 CC lib/util/uuid.o 00:03:15.320 CC lib/util/xor.o 00:03:15.320 CC lib/util/zipf.o 00:03:15.320 CC lib/util/md5.o 00:03:15.320 LIB libspdk_vfio_user.a 00:03:15.320 SO libspdk_vfio_user.so.5.0 00:03:15.320 SYMLINK libspdk_vfio_user.so 00:03:15.320 LIB libspdk_util.a 00:03:15.583 SO libspdk_util.so.10.1 00:03:15.583 SYMLINK libspdk_util.so 00:03:15.583 LIB libspdk_trace_parser.a 00:03:15.583 SO libspdk_trace_parser.so.6.0 00:03:15.847 SYMLINK libspdk_trace_parser.so 00:03:15.847 CC lib/json/json_util.o 00:03:15.847 CC lib/json/json_write.o 00:03:15.847 CC lib/json/json_parse.o 00:03:15.847 CC lib/env_dpdk/memory.o 00:03:15.847 CC lib/env_dpdk/env.o 00:03:15.847 CC lib/idxd/idxd.o 00:03:15.847 CC lib/env_dpdk/pci.o 00:03:15.847 CC lib/rdma_utils/rdma_utils.o 00:03:15.847 CC lib/vmd/vmd.o 00:03:15.847 CC lib/conf/conf.o 00:03:15.847 CC lib/vmd/led.o 00:03:15.847 CC lib/idxd/idxd_user.o 00:03:15.847 LIB libspdk_conf.a 00:03:15.847 LIB libspdk_json.a 00:03:15.847 SO libspdk_conf.so.6.0 00:03:16.108 SO libspdk_json.so.6.0 00:03:16.108 LIB libspdk_rdma_utils.a 00:03:16.108 SO libspdk_rdma_utils.so.1.0 00:03:16.108 SYMLINK libspdk_conf.so 00:03:16.108 CC lib/idxd/idxd_kernel.o 00:03:16.108 SYMLINK libspdk_json.so 00:03:16.108 CC lib/env_dpdk/init.o 00:03:16.108 SYMLINK libspdk_rdma_utils.so 00:03:16.108 CC lib/env_dpdk/threads.o 00:03:16.108 CC lib/env_dpdk/pci_ioat.o 00:03:16.108 CC lib/env_dpdk/pci_virtio.o 00:03:16.108 CC lib/env_dpdk/pci_vmd.o 00:03:16.108 CC lib/jsonrpc/jsonrpc_server.o 00:03:16.108 CC lib/env_dpdk/pci_idxd.o 00:03:16.108 CC lib/env_dpdk/pci_event.o 00:03:16.108 CC lib/env_dpdk/sigbus_handler.o 00:03:16.369 CC lib/env_dpdk/pci_dpdk.o 00:03:16.369 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:16.369 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:16.369 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:16.369 CC lib/jsonrpc/jsonrpc_client.o 00:03:16.369 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:16.369 LIB libspdk_idxd.a 00:03:16.369 SO libspdk_idxd.so.12.1 00:03:16.369 LIB libspdk_vmd.a 00:03:16.369 CC 
lib/rdma_provider/common.o 00:03:16.369 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:16.369 SO libspdk_vmd.so.6.0 00:03:16.369 SYMLINK libspdk_idxd.so 00:03:16.369 SYMLINK libspdk_vmd.so 00:03:16.630 LIB libspdk_rdma_provider.a 00:03:16.630 LIB libspdk_jsonrpc.a 00:03:16.630 SO libspdk_rdma_provider.so.7.0 00:03:16.630 SO libspdk_jsonrpc.so.6.0 00:03:16.630 SYMLINK libspdk_rdma_provider.so 00:03:16.630 SYMLINK libspdk_jsonrpc.so 00:03:16.890 CC lib/rpc/rpc.o 00:03:17.151 LIB libspdk_env_dpdk.a 00:03:17.151 LIB libspdk_rpc.a 00:03:17.151 SO libspdk_rpc.so.6.0 00:03:17.151 SO libspdk_env_dpdk.so.15.1 00:03:17.151 SYMLINK libspdk_rpc.so 00:03:17.412 SYMLINK libspdk_env_dpdk.so 00:03:17.412 CC lib/keyring/keyring_rpc.o 00:03:17.412 CC lib/keyring/keyring.o 00:03:17.412 CC lib/notify/notify.o 00:03:17.412 CC lib/notify/notify_rpc.o 00:03:17.412 CC lib/trace/trace.o 00:03:17.412 CC lib/trace/trace_flags.o 00:03:17.412 CC lib/trace/trace_rpc.o 00:03:17.673 LIB libspdk_notify.a 00:03:17.673 SO libspdk_notify.so.6.0 00:03:17.673 LIB libspdk_keyring.a 00:03:17.673 SYMLINK libspdk_notify.so 00:03:17.673 SO libspdk_keyring.so.2.0 00:03:17.673 LIB libspdk_trace.a 00:03:17.673 SO libspdk_trace.so.11.0 00:03:17.673 SYMLINK libspdk_keyring.so 00:03:17.673 SYMLINK libspdk_trace.so 00:03:17.935 CC lib/thread/thread.o 00:03:17.935 CC lib/thread/iobuf.o 00:03:17.935 CC lib/sock/sock_rpc.o 00:03:17.935 CC lib/sock/sock.o 00:03:18.196 LIB libspdk_sock.a 00:03:18.456 SO libspdk_sock.so.10.0 00:03:18.456 SYMLINK libspdk_sock.so 00:03:18.717 CC lib/nvme/nvme_ctrlr.o 00:03:18.717 CC lib/nvme/nvme_fabric.o 00:03:18.717 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:18.717 CC lib/nvme/nvme_pcie.o 00:03:18.717 CC lib/nvme/nvme_ns_cmd.o 00:03:18.717 CC lib/nvme/nvme_ns.o 00:03:18.717 CC lib/nvme/nvme_qpair.o 00:03:18.717 CC lib/nvme/nvme_pcie_common.o 00:03:18.717 CC lib/nvme/nvme.o 00:03:19.287 CC lib/nvme/nvme_quirks.o 00:03:19.287 CC lib/nvme/nvme_transport.o 00:03:19.287 CC lib/nvme/nvme_discovery.o 00:03:19.287 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:19.287 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:19.547 CC lib/nvme/nvme_tcp.o 00:03:19.547 CC lib/nvme/nvme_opal.o 00:03:19.547 CC lib/nvme/nvme_io_msg.o 00:03:19.547 CC lib/nvme/nvme_poll_group.o 00:03:19.547 LIB libspdk_thread.a 00:03:19.547 SO libspdk_thread.so.11.0 00:03:19.547 SYMLINK libspdk_thread.so 00:03:19.547 CC lib/nvme/nvme_zns.o 00:03:19.547 CC lib/nvme/nvme_stubs.o 00:03:19.808 CC lib/nvme/nvme_auth.o 00:03:19.808 CC lib/nvme/nvme_cuse.o 00:03:19.808 CC lib/nvme/nvme_rdma.o 00:03:20.069 CC lib/blob/blobstore.o 00:03:20.069 CC lib/accel/accel.o 00:03:20.069 CC lib/init/json_config.o 00:03:20.069 CC lib/accel/accel_rpc.o 00:03:20.069 CC lib/virtio/virtio.o 00:03:20.329 CC lib/init/subsystem.o 00:03:20.329 CC lib/init/subsystem_rpc.o 00:03:20.329 CC lib/init/rpc.o 00:03:20.329 CC lib/blob/request.o 00:03:20.329 CC lib/blob/zeroes.o 00:03:20.329 CC lib/virtio/virtio_vhost_user.o 00:03:20.590 LIB libspdk_init.a 00:03:20.590 SO libspdk_init.so.6.0 00:03:20.590 CC lib/virtio/virtio_vfio_user.o 00:03:20.590 CC lib/fsdev/fsdev.o 00:03:20.590 CC lib/fsdev/fsdev_io.o 00:03:20.590 SYMLINK libspdk_init.so 00:03:20.590 CC lib/blob/blob_bs_dev.o 00:03:20.590 CC lib/virtio/virtio_pci.o 00:03:20.590 CC lib/accel/accel_sw.o 00:03:20.851 CC lib/fsdev/fsdev_rpc.o 00:03:20.851 CC lib/event/app.o 00:03:20.851 CC lib/event/reactor.o 00:03:20.851 CC lib/event/log_rpc.o 00:03:20.851 CC lib/event/app_rpc.o 00:03:20.851 CC lib/event/scheduler_static.o 00:03:20.851 LIB libspdk_accel.a 
00:03:21.111 SO libspdk_accel.so.16.0 00:03:21.111 LIB libspdk_virtio.a 00:03:21.111 LIB libspdk_nvme.a 00:03:21.111 SO libspdk_virtio.so.7.0 00:03:21.111 SYMLINK libspdk_accel.so 00:03:21.111 SYMLINK libspdk_virtio.so 00:03:21.111 LIB libspdk_fsdev.a 00:03:21.111 LIB libspdk_event.a 00:03:21.111 SO libspdk_nvme.so.15.0 00:03:21.111 SO libspdk_fsdev.so.2.0 00:03:21.111 SO libspdk_event.so.14.0 00:03:21.111 SYMLINK libspdk_fsdev.so 00:03:21.371 SYMLINK libspdk_event.so 00:03:21.371 CC lib/bdev/bdev.o 00:03:21.371 CC lib/bdev/bdev_rpc.o 00:03:21.371 CC lib/bdev/bdev_zone.o 00:03:21.371 CC lib/bdev/part.o 00:03:21.371 CC lib/bdev/scsi_nvme.o 00:03:21.371 SYMLINK libspdk_nvme.so 00:03:21.371 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:21.942 LIB libspdk_fuse_dispatcher.a 00:03:22.203 SO libspdk_fuse_dispatcher.so.1.0 00:03:22.203 SYMLINK libspdk_fuse_dispatcher.so 00:03:23.635 LIB libspdk_blob.a 00:03:23.635 SO libspdk_blob.so.12.0 00:03:23.635 SYMLINK libspdk_blob.so 00:03:23.895 CC lib/blobfs/tree.o 00:03:23.895 CC lib/blobfs/blobfs.o 00:03:23.895 CC lib/lvol/lvol.o 00:03:23.895 LIB libspdk_bdev.a 00:03:23.895 SO libspdk_bdev.so.17.0 00:03:24.155 SYMLINK libspdk_bdev.so 00:03:24.155 CC lib/scsi/dev.o 00:03:24.155 CC lib/scsi/port.o 00:03:24.155 CC lib/ftl/ftl_init.o 00:03:24.155 CC lib/scsi/lun.o 00:03:24.155 CC lib/ftl/ftl_core.o 00:03:24.155 CC lib/nbd/nbd.o 00:03:24.155 CC lib/nvmf/ctrlr.o 00:03:24.155 CC lib/ublk/ublk.o 00:03:24.413 CC lib/ublk/ublk_rpc.o 00:03:24.413 CC lib/scsi/scsi.o 00:03:24.413 CC lib/ftl/ftl_layout.o 00:03:24.413 CC lib/scsi/scsi_bdev.o 00:03:24.413 CC lib/scsi/scsi_pr.o 00:03:24.413 CC lib/scsi/scsi_rpc.o 00:03:24.413 CC lib/nbd/nbd_rpc.o 00:03:24.673 LIB libspdk_lvol.a 00:03:24.673 SO libspdk_lvol.so.11.0 00:03:24.673 CC lib/scsi/task.o 00:03:24.673 SYMLINK libspdk_lvol.so 00:03:24.673 CC lib/nvmf/ctrlr_discovery.o 00:03:24.673 LIB libspdk_blobfs.a 00:03:24.673 CC lib/nvmf/ctrlr_bdev.o 00:03:24.673 LIB libspdk_nbd.a 00:03:24.673 SO libspdk_blobfs.so.11.0 00:03:24.673 SO libspdk_nbd.so.7.0 00:03:24.673 CC lib/ftl/ftl_debug.o 00:03:24.673 SYMLINK libspdk_blobfs.so 00:03:24.673 CC lib/ftl/ftl_io.o 00:03:24.673 CC lib/nvmf/subsystem.o 00:03:24.673 SYMLINK libspdk_nbd.so 00:03:24.673 CC lib/nvmf/nvmf.o 00:03:24.673 CC lib/nvmf/nvmf_rpc.o 00:03:24.934 LIB libspdk_ublk.a 00:03:24.934 SO libspdk_ublk.so.3.0 00:03:24.934 LIB libspdk_scsi.a 00:03:24.934 CC lib/ftl/ftl_sb.o 00:03:24.934 SYMLINK libspdk_ublk.so 00:03:24.934 CC lib/ftl/ftl_l2p.o 00:03:24.934 SO libspdk_scsi.so.9.0 00:03:24.934 CC lib/ftl/ftl_l2p_flat.o 00:03:24.934 SYMLINK libspdk_scsi.so 00:03:24.934 CC lib/ftl/ftl_nv_cache.o 00:03:24.934 CC lib/ftl/ftl_band.o 00:03:24.934 CC lib/nvmf/transport.o 00:03:24.934 CC lib/ftl/ftl_band_ops.o 00:03:25.196 CC lib/ftl/ftl_writer.o 00:03:25.196 CC lib/ftl/ftl_rq.o 00:03:25.196 CC lib/nvmf/tcp.o 00:03:25.457 CC lib/nvmf/stubs.o 00:03:25.457 CC lib/nvmf/mdns_server.o 00:03:25.457 CC lib/nvmf/rdma.o 00:03:25.718 CC lib/nvmf/auth.o 00:03:25.718 CC lib/ftl/ftl_reloc.o 00:03:25.718 CC lib/ftl/ftl_l2p_cache.o 00:03:25.718 CC lib/iscsi/conn.o 00:03:25.718 CC lib/iscsi/init_grp.o 00:03:25.979 CC lib/ftl/ftl_p2l.o 00:03:25.979 CC lib/ftl/ftl_p2l_log.o 00:03:25.979 CC lib/iscsi/iscsi.o 00:03:25.979 CC lib/ftl/mngt/ftl_mngt.o 00:03:26.240 CC lib/vhost/vhost.o 00:03:26.240 CC lib/iscsi/param.o 00:03:26.240 CC lib/iscsi/portal_grp.o 00:03:26.240 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:26.240 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:26.240 CC lib/ftl/mngt/ftl_mngt_startup.o 
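Each SPDK library above is emitted in both forms, for example LIB libspdk_accel.a alongside SO libspdk_accel.so.16.0 and SYMLINK libspdk_accel.so, where the trailing number is the per-library SO version and the symlink is the unversioned development name. A quick way to confirm the result in the build tree, assuming SPDK's default output directory build/lib was not overridden in this job:

    # Expect the unversioned name to resolve to the versioned shared object.
    ls -l /home/vagrant/spdk_repo/spdk/build/lib/libspdk_accel.so*
    # libspdk_accel.so -> libspdk_accel.so.16.0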
00:03:26.501 CC lib/iscsi/tgt_node.o 00:03:26.501 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:26.501 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:26.501 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:26.501 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:26.501 CC lib/vhost/vhost_rpc.o 00:03:26.501 CC lib/iscsi/iscsi_subsystem.o 00:03:26.760 CC lib/iscsi/iscsi_rpc.o 00:03:26.760 CC lib/iscsi/task.o 00:03:26.760 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.760 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.760 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.760 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.760 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:27.019 CC lib/ftl/utils/ftl_conf.o 00:03:27.019 CC lib/vhost/vhost_scsi.o 00:03:27.019 CC lib/ftl/utils/ftl_md.o 00:03:27.019 CC lib/vhost/vhost_blk.o 00:03:27.019 CC lib/vhost/rte_vhost_user.o 00:03:27.019 CC lib/ftl/utils/ftl_mempool.o 00:03:27.019 CC lib/ftl/utils/ftl_bitmap.o 00:03:27.277 CC lib/ftl/utils/ftl_property.o 00:03:27.277 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:27.277 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:27.277 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:27.277 LIB libspdk_nvmf.a 00:03:27.277 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:27.277 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:27.277 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:27.277 SO libspdk_nvmf.so.20.0 00:03:27.536 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:27.536 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:27.536 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:27.536 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:27.536 LIB libspdk_iscsi.a 00:03:27.536 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:27.536 SO libspdk_iscsi.so.8.0 00:03:27.536 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:27.536 SYMLINK libspdk_nvmf.so 00:03:27.536 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:27.536 CC lib/ftl/base/ftl_base_dev.o 00:03:27.536 CC lib/ftl/base/ftl_base_bdev.o 00:03:27.536 CC lib/ftl/ftl_trace.o 00:03:27.796 SYMLINK libspdk_iscsi.so 00:03:27.796 LIB libspdk_ftl.a 00:03:28.058 SO libspdk_ftl.so.9.0 00:03:28.058 LIB libspdk_vhost.a 00:03:28.058 SO libspdk_vhost.so.8.0 00:03:28.058 SYMLINK libspdk_ftl.so 00:03:28.058 SYMLINK libspdk_vhost.so 00:03:28.630 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.630 CC module/keyring/file/keyring.o 00:03:28.630 CC module/keyring/linux/keyring.o 00:03:28.630 CC module/sock/posix/posix.o 00:03:28.630 CC module/fsdev/aio/fsdev_aio.o 00:03:28.630 CC module/blob/bdev/blob_bdev.o 00:03:28.630 CC module/accel/error/accel_error.o 00:03:28.630 CC module/accel/dsa/accel_dsa.o 00:03:28.630 CC module/accel/ioat/accel_ioat.o 00:03:28.630 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.630 LIB libspdk_env_dpdk_rpc.a 00:03:28.630 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.630 CC module/keyring/linux/keyring_rpc.o 00:03:28.630 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.630 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.630 CC module/keyring/file/keyring_rpc.o 00:03:28.630 CC module/accel/error/accel_error_rpc.o 00:03:28.630 LIB libspdk_scheduler_dynamic.a 00:03:28.630 LIB libspdk_blob_bdev.a 00:03:28.630 LIB libspdk_keyring_linux.a 00:03:28.630 LIB libspdk_keyring_file.a 00:03:28.630 SO libspdk_blob_bdev.so.12.0 00:03:28.630 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.891 LIB libspdk_accel_ioat.a 00:03:28.891 SO libspdk_keyring_linux.so.1.0 00:03:28.891 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.891 SO libspdk_keyring_file.so.2.0 00:03:28.891 SO libspdk_accel_ioat.so.6.0 00:03:28.891 SYMLINK libspdk_blob_bdev.so 00:03:28.891 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.891 LIB libspdk_accel_error.a 00:03:28.891 
SYMLINK libspdk_keyring_linux.so 00:03:28.891 SYMLINK libspdk_keyring_file.so 00:03:28.891 SO libspdk_accel_error.so.2.0 00:03:28.891 SYMLINK libspdk_accel_ioat.so 00:03:28.891 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:28.891 CC module/accel/iaa/accel_iaa.o 00:03:28.891 LIB libspdk_accel_dsa.a 00:03:28.891 SYMLINK libspdk_accel_error.so 00:03:28.891 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.891 SO libspdk_accel_dsa.so.5.0 00:03:28.891 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.891 CC module/fsdev/aio/linux_aio_mgr.o 00:03:28.891 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.891 SYMLINK libspdk_accel_dsa.so 00:03:29.152 CC module/blobfs/bdev/blobfs_bdev.o 00:03:29.152 LIB libspdk_accel_iaa.a 00:03:29.152 CC module/bdev/delay/vbdev_delay.o 00:03:29.152 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.152 SO libspdk_accel_iaa.so.3.0 00:03:29.152 LIB libspdk_scheduler_gscheduler.a 00:03:29.152 LIB libspdk_scheduler_dpdk_governor.a 00:03:29.152 SO libspdk_scheduler_gscheduler.so.4.0 00:03:29.152 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:29.152 SYMLINK libspdk_accel_iaa.so 00:03:29.152 CC module/bdev/error/vbdev_error.o 00:03:29.152 LIB libspdk_sock_posix.a 00:03:29.152 LIB libspdk_fsdev_aio.a 00:03:29.152 CC module/bdev/gpt/gpt.o 00:03:29.152 CC module/bdev/gpt/vbdev_gpt.o 00:03:29.152 SYMLINK libspdk_scheduler_gscheduler.so 00:03:29.152 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:29.152 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:29.152 SO libspdk_sock_posix.so.6.0 00:03:29.152 SO libspdk_fsdev_aio.so.1.0 00:03:29.152 CC module/bdev/error/vbdev_error_rpc.o 00:03:29.152 SYMLINK libspdk_sock_posix.so 00:03:29.152 SYMLINK libspdk_fsdev_aio.so 00:03:29.413 LIB libspdk_blobfs_bdev.a 00:03:29.413 SO libspdk_blobfs_bdev.so.6.0 00:03:29.413 LIB libspdk_bdev_error.a 00:03:29.413 LIB libspdk_bdev_delay.a 00:03:29.413 CC module/bdev/malloc/bdev_malloc.o 00:03:29.413 SO libspdk_bdev_delay.so.6.0 00:03:29.413 SO libspdk_bdev_error.so.6.0 00:03:29.413 SYMLINK libspdk_blobfs_bdev.so 00:03:29.413 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.413 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.413 CC module/bdev/null/bdev_null.o 00:03:29.413 LIB libspdk_bdev_gpt.a 00:03:29.413 CC module/bdev/nvme/bdev_nvme.o 00:03:29.413 SYMLINK libspdk_bdev_delay.so 00:03:29.413 SYMLINK libspdk_bdev_error.so 00:03:29.413 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:29.413 SO libspdk_bdev_gpt.so.6.0 00:03:29.413 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.413 SYMLINK libspdk_bdev_gpt.so 00:03:29.413 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.413 CC module/bdev/raid/bdev_raid.o 00:03:29.413 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.674 CC module/bdev/split/vbdev_split.o 00:03:29.674 CC module/bdev/null/bdev_null_rpc.o 00:03:29.674 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.674 LIB libspdk_bdev_null.a 00:03:29.674 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.674 SO libspdk_bdev_null.so.6.0 00:03:29.674 LIB libspdk_bdev_passthru.a 00:03:29.674 LIB libspdk_bdev_malloc.a 00:03:29.675 SO libspdk_bdev_passthru.so.6.0 00:03:29.675 SO libspdk_bdev_malloc.so.6.0 00:03:29.675 CC module/bdev/raid/bdev_raid_sb.o 00:03:29.675 SYMLINK libspdk_bdev_null.so 00:03:29.934 LIB libspdk_bdev_split.a 00:03:29.935 SYMLINK libspdk_bdev_passthru.so 00:03:29.935 SO libspdk_bdev_split.so.6.0 00:03:29.935 SYMLINK libspdk_bdev_malloc.so 00:03:29.935 SYMLINK libspdk_bdev_split.so 00:03:29.935 CC module/bdev/raid/raid0.o 00:03:29.935 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.935 CC 
module/bdev/xnvme/bdev_xnvme.o 00:03:29.935 CC module/bdev/aio/bdev_aio.o 00:03:29.935 CC module/bdev/ftl/bdev_ftl.o 00:03:29.935 LIB libspdk_bdev_lvol.a 00:03:29.935 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.935 SO libspdk_bdev_lvol.so.6.0 00:03:30.195 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:30.195 SYMLINK libspdk_bdev_lvol.so 00:03:30.195 CC module/bdev/aio/bdev_aio_rpc.o 00:03:30.195 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.195 CC module/bdev/raid/raid1.o 00:03:30.195 CC module/bdev/nvme/nvme_rpc.o 00:03:30.195 LIB libspdk_bdev_ftl.a 00:03:30.195 LIB libspdk_bdev_xnvme.a 00:03:30.195 LIB libspdk_bdev_zone_block.a 00:03:30.195 SO libspdk_bdev_xnvme.so.3.0 00:03:30.195 SO libspdk_bdev_ftl.so.6.0 00:03:30.195 SO libspdk_bdev_zone_block.so.6.0 00:03:30.195 LIB libspdk_bdev_aio.a 00:03:30.455 SO libspdk_bdev_aio.so.6.0 00:03:30.455 SYMLINK libspdk_bdev_zone_block.so 00:03:30.455 SYMLINK libspdk_bdev_ftl.so 00:03:30.455 SYMLINK libspdk_bdev_aio.so 00:03:30.455 CC module/bdev/raid/concat.o 00:03:30.455 SYMLINK libspdk_bdev_xnvme.so 00:03:30.455 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.455 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.455 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.455 CC module/bdev/nvme/vbdev_opal.o 00:03:30.455 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.455 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.455 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:30.455 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.455 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.455 LIB libspdk_bdev_raid.a 00:03:30.714 SO libspdk_bdev_raid.so.6.0 00:03:30.714 SYMLINK libspdk_bdev_raid.so 00:03:30.714 LIB libspdk_bdev_iscsi.a 00:03:30.714 SO libspdk_bdev_iscsi.so.6.0 00:03:30.714 SYMLINK libspdk_bdev_iscsi.so 00:03:30.974 LIB libspdk_bdev_virtio.a 00:03:30.974 SO libspdk_bdev_virtio.so.6.0 00:03:30.974 SYMLINK libspdk_bdev_virtio.so 00:03:31.543 LIB libspdk_bdev_nvme.a 00:03:31.543 SO libspdk_bdev_nvme.so.7.1 00:03:31.804 SYMLINK libspdk_bdev_nvme.so 00:03:32.065 CC module/event/subsystems/sock/sock.o 00:03:32.065 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.065 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.065 CC module/event/subsystems/fsdev/fsdev.o 00:03:32.065 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.065 CC module/event/subsystems/keyring/keyring.o 00:03:32.065 CC module/event/subsystems/vmd/vmd.o 00:03:32.065 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.065 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.329 LIB libspdk_event_vmd.a 00:03:32.329 LIB libspdk_event_fsdev.a 00:03:32.329 LIB libspdk_event_scheduler.a 00:03:32.329 LIB libspdk_event_keyring.a 00:03:32.329 LIB libspdk_event_vhost_blk.a 00:03:32.329 LIB libspdk_event_sock.a 00:03:32.329 SO libspdk_event_vmd.so.6.0 00:03:32.329 SO libspdk_event_fsdev.so.1.0 00:03:32.329 LIB libspdk_event_iobuf.a 00:03:32.329 SO libspdk_event_sock.so.5.0 00:03:32.329 SO libspdk_event_keyring.so.1.0 00:03:32.329 SO libspdk_event_vhost_blk.so.3.0 00:03:32.329 SO libspdk_event_scheduler.so.4.0 00:03:32.329 SO libspdk_event_iobuf.so.3.0 00:03:32.329 SYMLINK libspdk_event_keyring.so 00:03:32.329 SYMLINK libspdk_event_vhost_blk.so 00:03:32.329 SYMLINK libspdk_event_sock.so 00:03:32.329 SYMLINK libspdk_event_fsdev.so 00:03:32.329 SYMLINK libspdk_event_vmd.so 00:03:32.329 SYMLINK libspdk_event_scheduler.so 00:03:32.329 SYMLINK libspdk_event_iobuf.so 00:03:32.594 CC module/event/subsystems/accel/accel.o 00:03:32.594 LIB libspdk_event_accel.a 00:03:32.594 SO 
libspdk_event_accel.so.6.0 00:03:32.852 SYMLINK libspdk_event_accel.so 00:03:33.114 CC module/event/subsystems/bdev/bdev.o 00:03:33.114 LIB libspdk_event_bdev.a 00:03:33.114 SO libspdk_event_bdev.so.6.0 00:03:33.373 SYMLINK libspdk_event_bdev.so 00:03:33.373 CC module/event/subsystems/scsi/scsi.o 00:03:33.373 CC module/event/subsystems/ublk/ublk.o 00:03:33.373 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:33.373 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:33.373 CC module/event/subsystems/nbd/nbd.o 00:03:33.632 LIB libspdk_event_ublk.a 00:03:33.632 LIB libspdk_event_nbd.a 00:03:33.632 LIB libspdk_event_scsi.a 00:03:33.632 SO libspdk_event_ublk.so.3.0 00:03:33.632 SO libspdk_event_nbd.so.6.0 00:03:33.632 SO libspdk_event_scsi.so.6.0 00:03:33.632 SYMLINK libspdk_event_ublk.so 00:03:33.633 LIB libspdk_event_nvmf.a 00:03:33.633 SYMLINK libspdk_event_nbd.so 00:03:33.633 SYMLINK libspdk_event_scsi.so 00:03:33.633 SO libspdk_event_nvmf.so.6.0 00:03:33.633 SYMLINK libspdk_event_nvmf.so 00:03:33.893 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:33.893 CC module/event/subsystems/iscsi/iscsi.o 00:03:33.893 LIB libspdk_event_vhost_scsi.a 00:03:33.893 SO libspdk_event_vhost_scsi.so.3.0 00:03:33.893 LIB libspdk_event_iscsi.a 00:03:34.154 SO libspdk_event_iscsi.so.6.0 00:03:34.154 SYMLINK libspdk_event_vhost_scsi.so 00:03:34.154 SYMLINK libspdk_event_iscsi.so 00:03:34.154 SO libspdk.so.6.0 00:03:34.154 SYMLINK libspdk.so 00:03:34.415 CC app/trace_record/trace_record.o 00:03:34.415 CXX app/trace/trace.o 00:03:34.415 CC app/spdk_lspci/spdk_lspci.o 00:03:34.415 CC app/nvmf_tgt/nvmf_main.o 00:03:34.415 CC app/iscsi_tgt/iscsi_tgt.o 00:03:34.415 CC app/spdk_tgt/spdk_tgt.o 00:03:34.415 CC test/thread/poller_perf/poller_perf.o 00:03:34.415 CC examples/util/zipf/zipf.o 00:03:34.415 CC test/app/bdev_svc/bdev_svc.o 00:03:34.676 LINK spdk_lspci 00:03:34.676 CC test/dma/test_dma/test_dma.o 00:03:34.676 LINK nvmf_tgt 00:03:34.676 LINK zipf 00:03:34.676 LINK poller_perf 00:03:34.676 LINK iscsi_tgt 00:03:34.676 LINK spdk_trace_record 00:03:34.676 LINK spdk_tgt 00:03:34.676 LINK bdev_svc 00:03:34.676 TEST_HEADER include/spdk/accel.h 00:03:34.676 TEST_HEADER include/spdk/accel_module.h 00:03:34.676 TEST_HEADER include/spdk/assert.h 00:03:34.676 TEST_HEADER include/spdk/barrier.h 00:03:34.676 TEST_HEADER include/spdk/base64.h 00:03:34.676 TEST_HEADER include/spdk/bdev.h 00:03:34.676 TEST_HEADER include/spdk/bdev_module.h 00:03:34.676 TEST_HEADER include/spdk/bdev_zone.h 00:03:34.676 TEST_HEADER include/spdk/bit_array.h 00:03:34.676 TEST_HEADER include/spdk/bit_pool.h 00:03:34.676 TEST_HEADER include/spdk/blob_bdev.h 00:03:34.676 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:34.676 TEST_HEADER include/spdk/blobfs.h 00:03:34.676 TEST_HEADER include/spdk/blob.h 00:03:34.676 TEST_HEADER include/spdk/conf.h 00:03:34.676 TEST_HEADER include/spdk/config.h 00:03:34.676 TEST_HEADER include/spdk/cpuset.h 00:03:34.676 TEST_HEADER include/spdk/crc16.h 00:03:34.676 TEST_HEADER include/spdk/crc32.h 00:03:34.676 TEST_HEADER include/spdk/crc64.h 00:03:34.676 TEST_HEADER include/spdk/dif.h 00:03:34.676 TEST_HEADER include/spdk/dma.h 00:03:34.676 TEST_HEADER include/spdk/endian.h 00:03:34.676 TEST_HEADER include/spdk/env_dpdk.h 00:03:34.676 TEST_HEADER include/spdk/env.h 00:03:34.676 TEST_HEADER include/spdk/event.h 00:03:34.676 TEST_HEADER include/spdk/fd_group.h 00:03:34.676 TEST_HEADER include/spdk/fd.h 00:03:34.676 TEST_HEADER include/spdk/file.h 00:03:34.676 TEST_HEADER include/spdk/fsdev.h 00:03:34.676 TEST_HEADER 
include/spdk/fsdev_module.h 00:03:34.676 TEST_HEADER include/spdk/ftl.h 00:03:34.676 TEST_HEADER include/spdk/gpt_spec.h 00:03:34.676 TEST_HEADER include/spdk/hexlify.h 00:03:34.676 TEST_HEADER include/spdk/histogram_data.h 00:03:34.676 TEST_HEADER include/spdk/idxd.h 00:03:34.676 TEST_HEADER include/spdk/idxd_spec.h 00:03:34.676 TEST_HEADER include/spdk/init.h 00:03:34.676 TEST_HEADER include/spdk/ioat.h 00:03:34.676 LINK spdk_trace 00:03:34.676 TEST_HEADER include/spdk/ioat_spec.h 00:03:34.676 TEST_HEADER include/spdk/iscsi_spec.h 00:03:34.676 TEST_HEADER include/spdk/json.h 00:03:34.676 TEST_HEADER include/spdk/jsonrpc.h 00:03:34.676 TEST_HEADER include/spdk/keyring.h 00:03:34.676 TEST_HEADER include/spdk/keyring_module.h 00:03:34.676 TEST_HEADER include/spdk/likely.h 00:03:34.676 TEST_HEADER include/spdk/log.h 00:03:34.676 TEST_HEADER include/spdk/lvol.h 00:03:34.676 TEST_HEADER include/spdk/md5.h 00:03:34.676 TEST_HEADER include/spdk/memory.h 00:03:34.676 TEST_HEADER include/spdk/mmio.h 00:03:34.676 TEST_HEADER include/spdk/nbd.h 00:03:34.676 TEST_HEADER include/spdk/net.h 00:03:34.676 TEST_HEADER include/spdk/notify.h 00:03:34.676 TEST_HEADER include/spdk/nvme.h 00:03:34.676 TEST_HEADER include/spdk/nvme_intel.h 00:03:34.676 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:34.937 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:34.937 TEST_HEADER include/spdk/nvme_spec.h 00:03:34.937 TEST_HEADER include/spdk/nvme_zns.h 00:03:34.937 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:34.937 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:34.937 TEST_HEADER include/spdk/nvmf.h 00:03:34.937 TEST_HEADER include/spdk/nvmf_spec.h 00:03:34.937 TEST_HEADER include/spdk/nvmf_transport.h 00:03:34.937 TEST_HEADER include/spdk/opal.h 00:03:34.937 TEST_HEADER include/spdk/opal_spec.h 00:03:34.937 TEST_HEADER include/spdk/pci_ids.h 00:03:34.937 CC app/spdk_nvme_perf/perf.o 00:03:34.937 TEST_HEADER include/spdk/pipe.h 00:03:34.937 TEST_HEADER include/spdk/queue.h 00:03:34.937 TEST_HEADER include/spdk/reduce.h 00:03:34.937 TEST_HEADER include/spdk/rpc.h 00:03:34.937 TEST_HEADER include/spdk/scheduler.h 00:03:34.937 TEST_HEADER include/spdk/scsi.h 00:03:34.937 TEST_HEADER include/spdk/scsi_spec.h 00:03:34.937 TEST_HEADER include/spdk/sock.h 00:03:34.937 TEST_HEADER include/spdk/stdinc.h 00:03:34.937 CC app/spdk_nvme_identify/identify.o 00:03:34.937 TEST_HEADER include/spdk/string.h 00:03:34.937 TEST_HEADER include/spdk/thread.h 00:03:34.937 TEST_HEADER include/spdk/trace.h 00:03:34.937 TEST_HEADER include/spdk/trace_parser.h 00:03:34.937 TEST_HEADER include/spdk/tree.h 00:03:34.937 TEST_HEADER include/spdk/ublk.h 00:03:34.937 TEST_HEADER include/spdk/util.h 00:03:34.937 TEST_HEADER include/spdk/uuid.h 00:03:34.937 TEST_HEADER include/spdk/version.h 00:03:34.937 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:34.937 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:34.937 TEST_HEADER include/spdk/vhost.h 00:03:34.937 TEST_HEADER include/spdk/vmd.h 00:03:34.937 TEST_HEADER include/spdk/xor.h 00:03:34.937 TEST_HEADER include/spdk/zipf.h 00:03:34.937 CC app/spdk_nvme_discover/discovery_aer.o 00:03:34.937 CXX test/cpp_headers/accel.o 00:03:34.937 CC examples/ioat/perf/perf.o 00:03:34.937 CC app/spdk_top/spdk_top.o 00:03:34.937 CC examples/ioat/verify/verify.o 00:03:34.937 CC test/app/histogram_perf/histogram_perf.o 00:03:34.937 CXX test/cpp_headers/accel_module.o 00:03:34.937 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:34.937 LINK test_dma 00:03:35.199 LINK spdk_nvme_discover 00:03:35.199 LINK histogram_perf 00:03:35.199 
CXX test/cpp_headers/assert.o 00:03:35.199 LINK ioat_perf 00:03:35.199 LINK verify 00:03:35.199 CXX test/cpp_headers/barrier.o 00:03:35.199 CXX test/cpp_headers/base64.o 00:03:35.199 CC app/spdk_dd/spdk_dd.o 00:03:35.199 CC app/vhost/vhost.o 00:03:35.460 CXX test/cpp_headers/bdev.o 00:03:35.460 CC app/fio/nvme/fio_plugin.o 00:03:35.460 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.460 LINK nvme_fuzz 00:03:35.460 CC examples/vmd/led/led.o 00:03:35.460 CXX test/cpp_headers/bdev_module.o 00:03:35.460 LINK spdk_nvme_identify 00:03:35.460 LINK vhost 00:03:35.460 LINK lsvmd 00:03:35.460 LINK led 00:03:35.721 CXX test/cpp_headers/bdev_zone.o 00:03:35.721 CXX test/cpp_headers/bit_array.o 00:03:35.721 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:35.721 CXX test/cpp_headers/bit_pool.o 00:03:35.721 LINK spdk_nvme_perf 00:03:35.721 LINK spdk_dd 00:03:35.721 LINK spdk_top 00:03:35.721 CXX test/cpp_headers/blob_bdev.o 00:03:35.721 CC test/env/vtophys/vtophys.o 00:03:35.721 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:35.982 CC examples/idxd/perf/perf.o 00:03:35.982 CC test/env/mem_callbacks/mem_callbacks.o 00:03:35.982 CC test/env/memory/memory_ut.o 00:03:35.982 CC test/env/pci/pci_ut.o 00:03:35.982 LINK spdk_nvme 00:03:35.982 LINK vtophys 00:03:35.982 CXX test/cpp_headers/blobfs_bdev.o 00:03:35.982 LINK env_dpdk_post_init 00:03:35.982 CC app/fio/bdev/fio_plugin.o 00:03:35.982 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:36.243 CXX test/cpp_headers/blobfs.o 00:03:36.243 LINK idxd_perf 00:03:36.243 CC test/app/jsoncat/jsoncat.o 00:03:36.243 CC test/app/stub/stub.o 00:03:36.243 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:36.243 CXX test/cpp_headers/blob.o 00:03:36.243 LINK jsoncat 00:03:36.243 LINK pci_ut 00:03:36.243 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:36.243 LINK stub 00:03:36.243 CXX test/cpp_headers/conf.o 00:03:36.243 LINK mem_callbacks 00:03:36.503 LINK interrupt_tgt 00:03:36.503 LINK spdk_bdev 00:03:36.503 CXX test/cpp_headers/config.o 00:03:36.503 CC examples/thread/thread/thread_ex.o 00:03:36.503 CXX test/cpp_headers/cpuset.o 00:03:36.503 LINK vhost_fuzz 00:03:36.503 CC test/event/event_perf/event_perf.o 00:03:36.503 CC examples/sock/hello_world/hello_sock.o 00:03:36.762 CC test/event/reactor/reactor.o 00:03:36.762 CC test/nvme/aer/aer.o 00:03:36.762 CC test/event/reactor_perf/reactor_perf.o 00:03:36.762 CXX test/cpp_headers/crc16.o 00:03:36.762 LINK thread 00:03:36.762 CC test/nvme/reset/reset.o 00:03:36.762 LINK event_perf 00:03:36.762 LINK reactor 00:03:36.762 LINK reactor_perf 00:03:36.762 CXX test/cpp_headers/crc32.o 00:03:36.762 LINK hello_sock 00:03:36.762 CC test/nvme/sgl/sgl.o 00:03:36.762 LINK aer 00:03:37.022 LINK memory_ut 00:03:37.022 CXX test/cpp_headers/crc64.o 00:03:37.022 CC test/event/app_repeat/app_repeat.o 00:03:37.022 LINK reset 00:03:37.022 CC test/rpc_client/rpc_client_test.o 00:03:37.022 CC test/nvme/e2edp/nvme_dp.o 00:03:37.022 LINK sgl 00:03:37.022 CC examples/accel/perf/accel_perf.o 00:03:37.022 CC test/accel/dif/dif.o 00:03:37.022 LINK app_repeat 00:03:37.022 CXX test/cpp_headers/dif.o 00:03:37.022 LINK rpc_client_test 00:03:37.282 CC test/nvme/overhead/overhead.o 00:03:37.282 CXX test/cpp_headers/dma.o 00:03:37.282 CXX test/cpp_headers/endian.o 00:03:37.282 LINK nvme_dp 00:03:37.282 CC examples/blob/hello_world/hello_blob.o 00:03:37.282 CC examples/blob/cli/blobcli.o 00:03:37.282 LINK iscsi_fuzz 00:03:37.283 CC test/event/scheduler/scheduler.o 00:03:37.283 CXX test/cpp_headers/env_dpdk.o 00:03:37.283 CXX test/cpp_headers/env.o 00:03:37.283 
LINK overhead 00:03:37.541 LINK hello_blob 00:03:37.542 LINK accel_perf 00:03:37.542 CXX test/cpp_headers/event.o 00:03:37.542 CC examples/nvme/hello_world/hello_world.o 00:03:37.542 CC examples/nvme/reconnect/reconnect.o 00:03:37.542 LINK scheduler 00:03:37.542 CC test/nvme/err_injection/err_injection.o 00:03:37.542 CXX test/cpp_headers/fd_group.o 00:03:37.542 CC test/nvme/startup/startup.o 00:03:37.542 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:37.542 LINK blobcli 00:03:37.802 LINK hello_world 00:03:37.802 LINK err_injection 00:03:37.802 CXX test/cpp_headers/fd.o 00:03:37.802 LINK startup 00:03:37.802 LINK dif 00:03:37.802 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.802 LINK reconnect 00:03:37.802 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.802 CXX test/cpp_headers/file.o 00:03:37.802 LINK hello_fsdev 00:03:37.802 CC test/nvme/reserve/reserve.o 00:03:38.063 CC test/nvme/simple_copy/simple_copy.o 00:03:38.063 CC test/blobfs/mkfs/mkfs.o 00:03:38.063 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:38.063 CXX test/cpp_headers/fsdev.o 00:03:38.063 LINK hello_bdev 00:03:38.063 CC test/lvol/esnap/esnap.o 00:03:38.063 CXX test/cpp_headers/fsdev_module.o 00:03:38.063 LINK reserve 00:03:38.063 CC test/bdev/bdevio/bdevio.o 00:03:38.063 CXX test/cpp_headers/ftl.o 00:03:38.063 LINK mkfs 00:03:38.063 LINK simple_copy 00:03:38.323 CC test/nvme/connect_stress/connect_stress.o 00:03:38.323 CXX test/cpp_headers/gpt_spec.o 00:03:38.323 CC examples/nvme/arbitration/arbitration.o 00:03:38.323 CC examples/nvme/hotplug/hotplug.o 00:03:38.323 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.323 CXX test/cpp_headers/hexlify.o 00:03:38.323 CC examples/nvme/abort/abort.o 00:03:38.323 LINK connect_stress 00:03:38.323 LINK nvme_manage 00:03:38.323 CXX test/cpp_headers/histogram_data.o 00:03:38.585 LINK bdevperf 00:03:38.585 LINK arbitration 00:03:38.585 LINK hotplug 00:03:38.585 LINK bdevio 00:03:38.585 LINK cmb_copy 00:03:38.585 CXX test/cpp_headers/idxd.o 00:03:38.585 CXX test/cpp_headers/idxd_spec.o 00:03:38.585 CC test/nvme/boot_partition/boot_partition.o 00:03:38.585 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.585 CXX test/cpp_headers/init.o 00:03:38.585 CXX test/cpp_headers/ioat.o 00:03:38.585 LINK abort 00:03:38.585 CXX test/cpp_headers/ioat_spec.o 00:03:38.585 CC test/nvme/compliance/nvme_compliance.o 00:03:38.585 CC test/nvme/fused_ordering/fused_ordering.o 00:03:38.585 CXX test/cpp_headers/iscsi_spec.o 00:03:38.585 LINK boot_partition 00:03:38.585 CXX test/cpp_headers/json.o 00:03:38.845 LINK pmr_persistence 00:03:38.845 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:38.845 CXX test/cpp_headers/jsonrpc.o 00:03:38.845 CC test/nvme/fdp/fdp.o 00:03:38.845 CXX test/cpp_headers/keyring.o 00:03:38.845 CXX test/cpp_headers/keyring_module.o 00:03:38.845 CC test/nvme/cuse/cuse.o 00:03:38.845 LINK fused_ordering 00:03:38.845 CXX test/cpp_headers/likely.o 00:03:38.845 CXX test/cpp_headers/log.o 00:03:38.845 LINK doorbell_aers 00:03:38.845 CXX test/cpp_headers/lvol.o 00:03:39.106 CXX test/cpp_headers/md5.o 00:03:39.106 CC examples/nvmf/nvmf/nvmf.o 00:03:39.106 LINK nvme_compliance 00:03:39.106 CXX test/cpp_headers/memory.o 00:03:39.106 CXX test/cpp_headers/mmio.o 00:03:39.106 CXX test/cpp_headers/nbd.o 00:03:39.106 CXX test/cpp_headers/net.o 00:03:39.106 CXX test/cpp_headers/notify.o 00:03:39.106 CXX test/cpp_headers/nvme.o 00:03:39.106 LINK fdp 00:03:39.106 CXX test/cpp_headers/nvme_intel.o 00:03:39.106 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.106 CXX 
test/cpp_headers/nvme_ocssd_spec.o 00:03:39.106 CXX test/cpp_headers/nvme_spec.o 00:03:39.106 CXX test/cpp_headers/nvme_zns.o 00:03:39.367 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.367 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.367 CXX test/cpp_headers/nvmf.o 00:03:39.367 LINK nvmf 00:03:39.367 CXX test/cpp_headers/nvmf_spec.o 00:03:39.367 CXX test/cpp_headers/nvmf_transport.o 00:03:39.367 CXX test/cpp_headers/opal.o 00:03:39.367 CXX test/cpp_headers/opal_spec.o 00:03:39.367 CXX test/cpp_headers/pci_ids.o 00:03:39.367 CXX test/cpp_headers/pipe.o 00:03:39.367 CXX test/cpp_headers/queue.o 00:03:39.367 CXX test/cpp_headers/reduce.o 00:03:39.367 CXX test/cpp_headers/rpc.o 00:03:39.367 CXX test/cpp_headers/scheduler.o 00:03:39.367 CXX test/cpp_headers/scsi.o 00:03:39.367 CXX test/cpp_headers/scsi_spec.o 00:03:39.367 CXX test/cpp_headers/sock.o 00:03:39.629 CXX test/cpp_headers/stdinc.o 00:03:39.629 CXX test/cpp_headers/string.o 00:03:39.629 CXX test/cpp_headers/thread.o 00:03:39.629 CXX test/cpp_headers/trace.o 00:03:39.629 CXX test/cpp_headers/trace_parser.o 00:03:39.629 CXX test/cpp_headers/tree.o 00:03:39.629 CXX test/cpp_headers/ublk.o 00:03:39.629 CXX test/cpp_headers/util.o 00:03:39.629 CXX test/cpp_headers/uuid.o 00:03:39.629 CXX test/cpp_headers/version.o 00:03:39.629 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.629 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.629 CXX test/cpp_headers/vhost.o 00:03:39.629 CXX test/cpp_headers/vmd.o 00:03:39.629 CXX test/cpp_headers/xor.o 00:03:39.629 CXX test/cpp_headers/zipf.o 00:03:39.890 LINK cuse 00:03:43.196 LINK esnap 00:03:43.196 00:03:43.196 real 1m7.064s 00:03:43.196 user 6m0.644s 00:03:43.196 sys 1m7.534s 00:03:43.196 12:29:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:43.196 12:29:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.196 ************************************ 00:03:43.196 END TEST make 00:03:43.196 ************************************ 00:03:43.196 12:29:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.196 12:29:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.196 12:29:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.196 12:29:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.196 12:29:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.196 12:29:42 -- pm/common@44 -- $ pid=5066 00:03:43.196 12:29:42 -- pm/common@50 -- $ kill -TERM 5066 00:03:43.196 12:29:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.196 12:29:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.196 12:29:42 -- pm/common@44 -- $ pid=5068 00:03:43.196 12:29:42 -- pm/common@50 -- $ kill -TERM 5068 00:03:43.196 12:29:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:43.196 12:29:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:43.196 12:29:42 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:43.196 12:29:42 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:43.196 12:29:42 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:43.458 12:29:42 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:43.458 12:29:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.458 12:29:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.458 12:29:42 -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.458 12:29:42 -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.458 12:29:42 -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.458 12:29:42 -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.458 12:29:42 -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.458 12:29:42 -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.458 12:29:42 -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.458 12:29:42 -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.458 12:29:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.458 12:29:42 -- scripts/common.sh@344 -- # case "$op" in 00:03:43.458 12:29:42 -- scripts/common.sh@345 -- # : 1 00:03:43.458 12:29:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.458 12:29:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:43.458 12:29:42 -- scripts/common.sh@365 -- # decimal 1 00:03:43.458 12:29:42 -- scripts/common.sh@353 -- # local d=1 00:03:43.458 12:29:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.458 12:29:42 -- scripts/common.sh@355 -- # echo 1 00:03:43.458 12:29:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.458 12:29:42 -- scripts/common.sh@366 -- # decimal 2 00:03:43.458 12:29:42 -- scripts/common.sh@353 -- # local d=2 00:03:43.458 12:29:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.458 12:29:42 -- scripts/common.sh@355 -- # echo 2 00:03:43.458 12:29:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.458 12:29:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.458 12:29:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.458 12:29:42 -- scripts/common.sh@368 -- # return 0 00:03:43.458 12:29:42 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.458 12:29:42 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:43.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.458 --rc genhtml_branch_coverage=1 00:03:43.458 --rc genhtml_function_coverage=1 00:03:43.458 --rc genhtml_legend=1 00:03:43.458 --rc geninfo_all_blocks=1 00:03:43.458 --rc geninfo_unexecuted_blocks=1 00:03:43.458 00:03:43.458 ' 00:03:43.458 12:29:42 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:43.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.458 --rc genhtml_branch_coverage=1 00:03:43.458 --rc genhtml_function_coverage=1 00:03:43.458 --rc genhtml_legend=1 00:03:43.458 --rc geninfo_all_blocks=1 00:03:43.458 --rc geninfo_unexecuted_blocks=1 00:03:43.458 00:03:43.458 ' 00:03:43.458 12:29:42 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:43.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.458 --rc genhtml_branch_coverage=1 00:03:43.458 --rc genhtml_function_coverage=1 00:03:43.458 --rc genhtml_legend=1 00:03:43.458 --rc geninfo_all_blocks=1 00:03:43.458 --rc geninfo_unexecuted_blocks=1 00:03:43.458 00:03:43.458 ' 00:03:43.458 12:29:42 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:43.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.458 --rc genhtml_branch_coverage=1 00:03:43.458 --rc genhtml_function_coverage=1 00:03:43.458 --rc genhtml_legend=1 00:03:43.458 --rc geninfo_all_blocks=1 00:03:43.458 --rc geninfo_unexecuted_blocks=1 00:03:43.458 00:03:43.458 ' 00:03:43.458 12:29:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:43.458 12:29:42 -- nvmf/common.sh@7 -- # uname -s 00:03:43.458 12:29:42 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.458 12:29:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.458 12:29:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.458 12:29:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.458 12:29:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.458 12:29:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.458 12:29:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.458 12:29:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.458 12:29:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.458 12:29:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.458 12:29:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78be8a9e-58b2-4e5c-9711-0955207b4fd9 00:03:43.458 12:29:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=78be8a9e-58b2-4e5c-9711-0955207b4fd9 00:03:43.458 12:29:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.458 12:29:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.458 12:29:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:43.458 12:29:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:43.458 12:29:43 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:43.458 12:29:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:43.458 12:29:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.458 12:29:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.459 12:29:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.459 12:29:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.459 12:29:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.459 12:29:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.459 12:29:43 -- paths/export.sh@5 -- # export PATH 00:03:43.459 12:29:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.459 12:29:43 -- nvmf/common.sh@51 -- # : 0 00:03:43.459 12:29:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:43.459 12:29:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:43.459 12:29:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:43.459 12:29:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.459 12:29:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.459 12:29:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:43.459 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:43.459 12:29:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:43.459 12:29:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:43.459 12:29:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:43.459 12:29:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.459 12:29:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.459 12:29:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.459 12:29:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.459 12:29:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.459 12:29:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.459 12:29:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:43.459 12:29:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.459 12:29:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.459 12:29:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.459 12:29:43 -- spdk/autotest.sh@48 -- # udevadm_pid=56020 00:03:43.459 12:29:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:43.459 12:29:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.459 12:29:43 -- pm/common@17 -- # local monitor 00:03:43.459 12:29:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.459 12:29:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.459 12:29:43 -- pm/common@25 -- # sleep 1 00:03:43.459 12:29:43 -- pm/common@21 -- # date +%s 00:03:43.459 12:29:43 -- pm/common@21 -- # date +%s 00:03:43.459 12:29:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734179383 00:03:43.459 12:29:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734179383 00:03:43.459 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734179383_collect-cpu-load.pm.log 00:03:43.459 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734179383_collect-vmstat.pm.log 00:03:44.401 12:29:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:44.401 12:29:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:44.401 12:29:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:44.401 12:29:44 -- common/autotest_common.sh@10 -- # set +x 00:03:44.401 12:29:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:44.401 12:29:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:44.401 12:29:44 -- common/autotest_common.sh@10 -- # set +x 00:03:44.401 12:29:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:44.401 12:29:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:44.402 12:29:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:44.402 12:29:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:44.402 12:29:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:44.402 12:29:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:44.402 12:29:44 -- common/autotest_common.sh@1457 -- # uname 00:03:44.402 12:29:44 -- 
common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:44.402 12:29:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:44.402 12:29:44 -- common/autotest_common.sh@1477 -- # uname 00:03:44.663 12:29:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:44.663 12:29:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:44.663 12:29:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:44.663 lcov: LCOV version 1.15 00:03:44.663 12:29:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:59.576 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:59.576 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:14.478 12:30:14 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:14.478 12:30:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.478 12:30:14 -- common/autotest_common.sh@10 -- # set +x 00:04:14.478 12:30:14 -- spdk/autotest.sh@78 -- # rm -f 00:04:14.478 12:30:14 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:15.044 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.610 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:15.610 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:15.610 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:15.610 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:15.610 12:30:15 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:15.610 12:30:15 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:15.610 12:30:15 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:15.610 12:30:15 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:15.610 12:30:15 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:15.610 12:30:15 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:15.610 12:30:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:15.610 12:30:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:15.610 12:30:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:15.610 12:30:15 -- 
common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:15.610 12:30:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:15.610 12:30:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:15.610 12:30:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n2 00:04:15.610 12:30:15 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:04:15.610 12:30:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:15.610 12:30:15 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n3 00:04:15.610 12:30:15 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:04:15.610 12:30:15 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:04:15.610 12:30:15 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:15.610 12:30:15 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:15.610 12:30:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.610 12:30:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.610 12:30:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:15.610 12:30:15 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:15.610 12:30:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:15.610 No valid GPT data, bailing 00:04:15.610 12:30:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.610 12:30:15 -- scripts/common.sh@394 -- # pt= 00:04:15.610 12:30:15 -- scripts/common.sh@395 -- # return 1 00:04:15.610 12:30:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:15.610 1+0 records in 00:04:15.610 1+0 records out 00:04:15.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0211309 s, 49.6 MB/s 00:04:15.610 12:30:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.610 12:30:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.610 12:30:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:15.610 12:30:15 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:15.610 12:30:15 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:15.610 No valid GPT data, bailing 00:04:15.610 12:30:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:15.610 12:30:15 -- scripts/common.sh@394 -- # pt= 00:04:15.610 12:30:15 -- scripts/common.sh@395 -- # return 1 00:04:15.610 12:30:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:15.610 1+0 records in 00:04:15.610 1+0 records out 00:04:15.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492819 s, 213 MB/s 00:04:15.610 12:30:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.611 12:30:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.611 12:30:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:15.611 12:30:15 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:15.611 12:30:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:15.611 No valid GPT data, bailing 00:04:15.611 12:30:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # pt= 00:04:15.869 12:30:15 -- scripts/common.sh@395 -- # return 1 00:04:15.869 12:30:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:15.869 1+0 records in 00:04:15.869 1+0 records out 00:04:15.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474732 s, 221 MB/s 00:04:15.869 12:30:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.869 12:30:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.869 12:30:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:15.869 12:30:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:15.869 12:30:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:15.869 No valid GPT data, bailing 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # pt= 00:04:15.869 12:30:15 -- scripts/common.sh@395 -- # return 1 00:04:15.869 12:30:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:15.869 1+0 records in 00:04:15.869 1+0 records out 00:04:15.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00450841 s, 233 MB/s 00:04:15.869 12:30:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.869 12:30:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.869 12:30:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:04:15.869 12:30:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:04:15.869 12:30:15 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:04:15.869 No valid GPT data, bailing 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # pt= 00:04:15.869 12:30:15 -- scripts/common.sh@395 -- # return 1 00:04:15.869 12:30:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:04:15.869 1+0 records in 00:04:15.869 1+0 records out 00:04:15.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00353327 s, 297 MB/s 00:04:15.869 12:30:15 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:15.869 12:30:15 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:15.869 12:30:15 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:04:15.869 12:30:15 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:04:15.869 12:30:15 -- scripts/common.sh@390 -- 
# /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:04:15.869 No valid GPT data, bailing 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:04:15.869 12:30:15 -- scripts/common.sh@394 -- # pt= 00:04:15.869 12:30:15 -- scripts/common.sh@395 -- # return 1 00:04:15.869 12:30:15 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:04:15.869 1+0 records in 00:04:15.869 1+0 records out 00:04:15.869 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047506 s, 221 MB/s 00:04:15.869 12:30:15 -- spdk/autotest.sh@105 -- # sync 00:04:16.127 12:30:15 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.127 12:30:15 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.127 12:30:15 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:17.501 12:30:17 -- spdk/autotest.sh@111 -- # uname -s 00:04:17.501 12:30:17 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:17.501 12:30:17 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:17.501 12:30:17 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:18.068 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:18.329 Hugepages 00:04:18.329 node hugesize free / total 00:04:18.329 node0 1048576kB 0 / 0 00:04:18.329 node0 2048kB 0 / 0 00:04:18.329 00:04:18.329 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:18.589 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:18.589 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:18.589 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:18.589 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:04:18.850 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:18.850 12:30:18 -- spdk/autotest.sh@117 -- # uname -s 00:04:18.850 12:30:18 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:18.850 12:30:18 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:18.850 12:30:18 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:19.109 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.676 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.676 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.676 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.934 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.934 12:30:19 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:20.870 12:30:20 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:20.870 12:30:20 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:20.870 12:30:20 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.870 12:30:20 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:20.870 12:30:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:20.870 12:30:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:20.870 12:30:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.870 12:30:20 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:20.870 12:30:20 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:20.870 12:30:20 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:20.870 12:30:20 -- common/autotest_common.sh@1504 -- # printf '%s\n' 
0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:20.870 12:30:20 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:21.128 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:21.386 Waiting for block devices as requested 00:04:21.386 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.386 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.644 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:21.644 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.909 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:26.909 12:30:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.909 12:30:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:26.909 12:30:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.909 12:30:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1543 -- # continue 00:04:26.909 12:30:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:26.909 12:30:26 -- 
common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.909 12:30:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1543 -- # continue 00:04:26.909 12:30:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.909 12:30:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1543 -- # continue 00:04:26.909 12:30:26 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:26.909 12:30:26 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 
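
[editor's note] The loop traced above resolves each PCI address to its /dev/nvmeX controller through sysfs, then parses `nvme id-ctrl` output for the OACS and UNVMCAP fields. The sketch below condenses that logic; the function names here are illustrative rewrites of the traced helpers (get_nvme_ctrlr_from_bdf and the inline oacs/unvmcap checks), not the scripts verbatim.

# Resolve a PCI BDF (e.g. 0000:00:10.0) to its NVMe controller name.
# /sys/class/nvme/nvmeN is a symlink into the owning PCI device's sysfs path,
# so resolving the links and grepping for the BDF recovers the mapping.
nvme_ctrlr_from_bdf() {
    local bdf=$1 sysfs
    sysfs=$(readlink -f /sys/class/nvme/nvme* 2>/dev/null | grep "$bdf/nvme/nvme") || return 1
    basename "$sysfs"    # e.g. nvme1
}

# Succeeds when there is nothing to reclaim: either the controller lacks
# namespace management (OACS bit 3, 0x8) or it reports zero unallocated
# NVM capacity -- the "continue" path seen in the trace above.
check_ns_manage() {
    local ctrlr=/dev/$1 oacs unvmcap
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. ' 0x12a'
    (( oacs & 0x8 )) || return 0
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # e.g. ' 0'
    [[ $unvmcap -eq 0 ]]
}
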
00:04:26.909 12:30:26 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.909 12:30:26 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.909 12:30:26 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.909 12:30:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.909 12:30:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.909 12:30:26 -- common/autotest_common.sh@1543 -- # continue 00:04:26.909 12:30:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:26.909 12:30:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.909 12:30:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.909 12:30:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:26.909 12:30:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.909 12:30:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.909 12:30:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.740 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.740 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.740 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.740 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.740 12:30:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:27.740 12:30:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.740 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:04:28.000 12:30:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:28.000 12:30:27 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:28.000 12:30:27 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:28.000 12:30:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:28.000 12:30:27 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:28.000 12:30:27 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:28.000 12:30:27 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:28.000 12:30:27 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:28.000 12:30:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:28.000 12:30:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:28.000 12:30:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.000 12:30:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:28.000 12:30:27 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:28.000 12:30:27 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:28.000 12:30:27 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:28.000 12:30:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:28.000 12:30:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.000 12:30:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:28.000 12:30:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.000 12:30:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:28.000 12:30:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.000 12:30:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:28.000 12:30:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:28.000 12:30:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:28.000 12:30:27 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:28.000 12:30:27 -- common/autotest_common.sh@1572 -- # return 0 00:04:28.000 12:30:27 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:28.000 12:30:27 -- common/autotest_common.sh@1580 -- # return 0 00:04:28.000 12:30:27 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:28.000 12:30:27 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:28.000 12:30:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.000 12:30:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:28.000 12:30:27 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:28.000 12:30:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:28.000 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:04:28.000 12:30:27 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:28.000 12:30:27 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:28.000 12:30:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.000 12:30:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.000 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:04:28.000 ************************************ 00:04:28.000 START TEST env 00:04:28.000 ************************************ 00:04:28.000 12:30:27 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:28.000 * Looking for test storage... 
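
[editor's note] The opal-revert step traced above enumerates NVMe BDFs by running gen_nvme.sh and extracting the traddr fields with jq, then keeps only controllers whose PCI device ID matches the one it targets (0x0a54; the QEMU devices in this run report 0x0010, so the filtered list is empty and the step returns immediately). A hedged, condensed sketch of those two helpers; `$rootdir` is the variable the traced script itself uses:

# List NVMe PCI addresses from the generated SPDK config.
get_nvme_bdfs() {
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
}

# Keep only BDFs whose PCI device ID matches $1 (e.g. 0x0a54).
get_nvme_bdfs_by_id() {
    local id=$1 bdf device
    for bdf in $(get_nvme_bdfs); do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # 0x0010 on this VM
        [[ $device == "$id" ]] && echo "$bdf"
    done
    return 0    # an empty result is a valid outcome, as in this run
}
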
00:04:28.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:28.000 12:30:27 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:28.000 12:30:27 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:28.000 12:30:27 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:28.000 12:30:27 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:28.000 12:30:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.001 12:30:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.001 12:30:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.001 12:30:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.001 12:30:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.001 12:30:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.001 12:30:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.001 12:30:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.001 12:30:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.001 12:30:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.001 12:30:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.001 12:30:27 env -- scripts/common.sh@344 -- # case "$op" in 00:04:28.001 12:30:27 env -- scripts/common.sh@345 -- # : 1 00:04:28.001 12:30:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.001 12:30:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.001 12:30:27 env -- scripts/common.sh@365 -- # decimal 1 00:04:28.001 12:30:27 env -- scripts/common.sh@353 -- # local d=1 00:04:28.001 12:30:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.001 12:30:27 env -- scripts/common.sh@355 -- # echo 1 00:04:28.001 12:30:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.001 12:30:27 env -- scripts/common.sh@366 -- # decimal 2 00:04:28.001 12:30:27 env -- scripts/common.sh@353 -- # local d=2 00:04:28.001 12:30:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.001 12:30:27 env -- scripts/common.sh@355 -- # echo 2 00:04:28.001 12:30:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.001 12:30:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.001 12:30:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.001 12:30:27 env -- scripts/common.sh@368 -- # return 0 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:28.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.001 --rc genhtml_branch_coverage=1 00:04:28.001 --rc genhtml_function_coverage=1 00:04:28.001 --rc genhtml_legend=1 00:04:28.001 --rc geninfo_all_blocks=1 00:04:28.001 --rc geninfo_unexecuted_blocks=1 00:04:28.001 00:04:28.001 ' 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:28.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.001 --rc genhtml_branch_coverage=1 00:04:28.001 --rc genhtml_function_coverage=1 00:04:28.001 --rc genhtml_legend=1 00:04:28.001 --rc geninfo_all_blocks=1 00:04:28.001 --rc geninfo_unexecuted_blocks=1 00:04:28.001 00:04:28.001 ' 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:28.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.001 --rc genhtml_branch_coverage=1 00:04:28.001 --rc genhtml_function_coverage=1 00:04:28.001 --rc 
genhtml_legend=1 00:04:28.001 --rc geninfo_all_blocks=1 00:04:28.001 --rc geninfo_unexecuted_blocks=1 00:04:28.001 00:04:28.001 ' 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:28.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.001 --rc genhtml_branch_coverage=1 00:04:28.001 --rc genhtml_function_coverage=1 00:04:28.001 --rc genhtml_legend=1 00:04:28.001 --rc geninfo_all_blocks=1 00:04:28.001 --rc geninfo_unexecuted_blocks=1 00:04:28.001 00:04:28.001 ' 00:04:28.001 12:30:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.001 12:30:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.001 12:30:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.261 ************************************ 00:04:28.261 START TEST env_memory 00:04:28.261 ************************************ 00:04:28.261 12:30:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:28.261 00:04:28.261 00:04:28.261 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.261 http://cunit.sourceforge.net/ 00:04:28.261 00:04:28.261 00:04:28.261 Suite: memory 00:04:28.261 Test: alloc and free memory map ...[2024-12-14 12:30:27.792990] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:28.261 passed 00:04:28.261 Test: mem map translation ...[2024-12-14 12:30:27.831556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:28.261 [2024-12-14 12:30:27.831605] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:28.261 [2024-12-14 12:30:27.831659] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:28.261 [2024-12-14 12:30:27.831674] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:28.261 passed 00:04:28.261 Test: mem map registration ...[2024-12-14 12:30:27.899648] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:28.261 [2024-12-14 12:30:27.899688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:28.261 passed 00:04:28.261 Test: mem map adjacent registrations ...passed 00:04:28.261 00:04:28.261 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.261 suites 1 1 n/a 0 0 00:04:28.261 tests 4 4 4 0 0 00:04:28.261 asserts 152 152 152 0 n/a 00:04:28.261 00:04:28.261 Elapsed time = 0.232 seconds 00:04:28.521 00:04:28.521 real 0m0.266s 00:04:28.521 user 0m0.240s 00:04:28.521 sys 0m0.020s 00:04:28.521 12:30:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.521 ************************************ 00:04:28.521 END TEST env_memory 00:04:28.521 ************************************ 00:04:28.521 12:30:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.521 12:30:28 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.521 12:30:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.521 12:30:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.521 12:30:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.521 ************************************ 00:04:28.521 START TEST env_vtophys 00:04:28.521 ************************************ 00:04:28.521 12:30:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.521 EAL: lib.eal log level changed from notice to debug 00:04:28.521 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 1 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 2 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 3 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 4 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 5 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 6 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 7 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 8 as core 0 on socket 0 00:04:28.521 EAL: Detected lcore 9 as core 0 on socket 0 00:04:28.521 EAL: Maximum logical cores by configuration: 128 00:04:28.521 EAL: Detected CPU lcores: 10 00:04:28.521 EAL: Detected NUMA nodes: 1 00:04:28.521 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.521 EAL: Detected shared linkage of DPDK 00:04:28.521 EAL: No shared files mode enabled, IPC will be disabled 00:04:28.521 EAL: Selected IOVA mode 'PA' 00:04:28.521 EAL: Probing VFIO support... 00:04:28.521 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.521 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:28.521 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.521 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.521 EAL: Setting up physically contiguous memory... 
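Context for the EAL output around this point: the vtophys binary is bringing up DPDK inside a VM without the vfio/vfio_pci kernel modules, so EAL skips VFIO support and runs in IOVA mode 'PA' with hugepage-backed memory (the 2 MiB segment lists it reserves just below). Two generic host diagnostics for checking that state, offered as assumptions about the environment rather than commands this suite runs:

  $ grep Huge /proc/meminfo
  $ lsmod | grep vfio || echo 'vfio not loaded'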
00:04:28.521 EAL: Setting maximum number of open files to 524288 00:04:28.521 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.521 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.521 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.521 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.521 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.521 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.521 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.521 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.521 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.521 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.521 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.521 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.521 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.521 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.521 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.521 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.521 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.521 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.521 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.521 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.521 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.521 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.521 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.521 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.521 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.521 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.521 EAL: Hugepages will be freed exactly as allocated. 00:04:28.521 EAL: No shared files mode enabled, IPC is disabled 00:04:28.521 EAL: No shared files mode enabled, IPC is disabled 00:04:28.521 EAL: TSC frequency is ~2600000 KHz 00:04:28.521 EAL: Main lcore 0 is ready (tid=7f8a2fcd2a40;cpuset=[0]) 00:04:28.521 EAL: Trying to obtain current memory policy. 00:04:28.521 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.521 EAL: Restoring previous memory policy: 0 00:04:28.521 EAL: request: mp_malloc_sync 00:04:28.521 EAL: No shared files mode enabled, IPC is disabled 00:04:28.521 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.521 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.521 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.521 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.521 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:28.521 00:04:28.521 00:04:28.521 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.521 http://cunit.sourceforge.net/ 00:04:28.521 00:04:28.521 00:04:28.521 Suite: components_suite 00:04:29.091 Test: vtophys_malloc_test ...passed 00:04:29.091 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
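The vtophys_spdk_malloc_test sequence that follows is an expand/shrink ladder: each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair is one allocate-and-free step of roughly doubling size (4 MB, 6 MB, 10 MB, 18 MB, ... up to 1026 MB), and every step fires the 'spdk:(nil)' mem event callback registered above. A minimal sketch for rerunning these env binaries by hand, using the paths shown in this log (normally they are driven through run_test; sudo is an assumption, since hugepage setup usually needs root):

  $ sudo /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
  $ sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys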
00:04:29.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.091 EAL: Restoring previous memory policy: 4 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was expanded by 4MB 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was shrunk by 4MB 00:04:29.091 EAL: Trying to obtain current memory policy. 00:04:29.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.091 EAL: Restoring previous memory policy: 4 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was expanded by 6MB 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was shrunk by 6MB 00:04:29.091 EAL: Trying to obtain current memory policy. 00:04:29.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.091 EAL: Restoring previous memory policy: 4 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was expanded by 10MB 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was shrunk by 10MB 00:04:29.091 EAL: Trying to obtain current memory policy. 00:04:29.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.091 EAL: Restoring previous memory policy: 4 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was expanded by 18MB 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was shrunk by 18MB 00:04:29.091 EAL: Trying to obtain current memory policy. 00:04:29.091 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.091 EAL: Restoring previous memory policy: 4 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was expanded by 34MB 00:04:29.091 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.091 EAL: request: mp_malloc_sync 00:04:29.091 EAL: No shared files mode enabled, IPC is disabled 00:04:29.091 EAL: Heap on socket 0 was shrunk by 34MB 00:04:29.091 EAL: Trying to obtain current memory policy. 
00:04:29.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.092 EAL: Restoring previous memory policy: 4 00:04:29.092 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.092 EAL: request: mp_malloc_sync 00:04:29.092 EAL: No shared files mode enabled, IPC is disabled 00:04:29.092 EAL: Heap on socket 0 was expanded by 66MB 00:04:29.092 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.092 EAL: request: mp_malloc_sync 00:04:29.092 EAL: No shared files mode enabled, IPC is disabled 00:04:29.092 EAL: Heap on socket 0 was shrunk by 66MB 00:04:29.092 EAL: Trying to obtain current memory policy. 00:04:29.092 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.350 EAL: Restoring previous memory policy: 4 00:04:29.350 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.350 EAL: request: mp_malloc_sync 00:04:29.350 EAL: No shared files mode enabled, IPC is disabled 00:04:29.351 EAL: Heap on socket 0 was expanded by 130MB 00:04:29.351 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.351 EAL: request: mp_malloc_sync 00:04:29.351 EAL: No shared files mode enabled, IPC is disabled 00:04:29.351 EAL: Heap on socket 0 was shrunk by 130MB 00:04:29.611 EAL: Trying to obtain current memory policy. 00:04:29.611 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.611 EAL: Restoring previous memory policy: 4 00:04:29.611 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.611 EAL: request: mp_malloc_sync 00:04:29.611 EAL: No shared files mode enabled, IPC is disabled 00:04:29.611 EAL: Heap on socket 0 was expanded by 258MB 00:04:29.883 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.883 EAL: request: mp_malloc_sync 00:04:29.883 EAL: No shared files mode enabled, IPC is disabled 00:04:29.883 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.142 EAL: Trying to obtain current memory policy. 00:04:30.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.142 EAL: Restoring previous memory policy: 4 00:04:30.142 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.142 EAL: request: mp_malloc_sync 00:04:30.142 EAL: No shared files mode enabled, IPC is disabled 00:04:30.142 EAL: Heap on socket 0 was expanded by 514MB 00:04:30.713 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.713 EAL: request: mp_malloc_sync 00:04:30.713 EAL: No shared files mode enabled, IPC is disabled 00:04:30.713 EAL: Heap on socket 0 was shrunk by 514MB 00:04:31.281 EAL: Trying to obtain current memory policy. 
00:04:31.281 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.541 EAL: Restoring previous memory policy: 4 00:04:31.541 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.541 EAL: request: mp_malloc_sync 00:04:31.541 EAL: No shared files mode enabled, IPC is disabled 00:04:31.541 EAL: Heap on socket 0 was expanded by 1026MB 00:04:32.485 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.485 EAL: request: mp_malloc_sync 00:04:32.485 EAL: No shared files mode enabled, IPC is disabled 00:04:32.485 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:33.427 passed 00:04:33.427 00:04:33.427 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.427 suites 1 1 n/a 0 0 00:04:33.427 tests 2 2 2 0 0 00:04:33.427 asserts 5838 5838 5838 0 n/a 00:04:33.427 00:04:33.427 Elapsed time = 4.623 seconds 00:04:33.427 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.427 EAL: request: mp_malloc_sync 00:04:33.427 EAL: No shared files mode enabled, IPC is disabled 00:04:33.427 EAL: Heap on socket 0 was shrunk by 2MB 00:04:33.427 EAL: No shared files mode enabled, IPC is disabled 00:04:33.427 EAL: No shared files mode enabled, IPC is disabled 00:04:33.427 EAL: No shared files mode enabled, IPC is disabled 00:04:33.427 00:04:33.427 real 0m4.894s 00:04:33.427 user 0m4.130s 00:04:33.427 sys 0m0.613s 00:04:33.427 12:30:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.427 ************************************ 00:04:33.427 END TEST env_vtophys 00:04:33.427 ************************************ 00:04:33.427 12:30:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:33.427 12:30:32 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:33.427 12:30:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.427 12:30:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.427 12:30:32 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.427 ************************************ 00:04:33.427 START TEST env_pci 00:04:33.427 ************************************ 00:04:33.427 12:30:32 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:33.427 00:04:33.427 00:04:33.427 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.427 http://cunit.sourceforge.net/ 00:04:33.427 00:04:33.427 00:04:33.427 Suite: pci 00:04:33.427 Test: pci_hook ...[2024-12-14 12:30:33.010840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58779 has claimed it 00:04:33.427 passed 00:04:33.428 00:04:33.428 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.428 suites 1 1 n/a 0 0 00:04:33.428 tests 1 1 1 0 0 00:04:33.428 asserts 25 25 25 0 n/a 00:04:33.428 00:04:33.428 Elapsed time = 0.007 seconds 00:04:33.428 EAL: Cannot find device (10000:00:01.0) 00:04:33.428 EAL: Failed to attach device on primary process 00:04:33.428 ************************************ 00:04:33.428 END TEST env_pci 00:04:33.428 ************************************ 00:04:33.428 00:04:33.428 real 0m0.073s 00:04:33.428 user 0m0.029s 00:04:33.428 sys 0m0.043s 00:04:33.428 12:30:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.428 12:30:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:33.428 12:30:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:33.428 12:30:33 env -- env/env.sh@15 -- # uname 00:04:33.428 12:30:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:33.428 12:30:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:33.428 12:30:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.428 12:30:33 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:33.428 12:30:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.428 12:30:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.428 ************************************ 00:04:33.428 START TEST env_dpdk_post_init 00:04:33.428 ************************************ 00:04:33.428 12:30:33 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:33.428 EAL: Detected CPU lcores: 10 00:04:33.428 EAL: Detected NUMA nodes: 1 00:04:33.428 EAL: Detected shared linkage of DPDK 00:04:33.428 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.428 EAL: Selected IOVA mode 'PA' 00:04:33.688 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.688 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:33.688 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:33.688 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:33.688 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:33.688 Starting DPDK initialization... 00:04:33.688 Starting SPDK post initialization... 00:04:33.688 SPDK NVMe probe 00:04:33.688 Attaching to 0000:00:10.0 00:04:33.688 Attaching to 0000:00:11.0 00:04:33.688 Attaching to 0000:00:12.0 00:04:33.688 Attaching to 0000:00:13.0 00:04:33.688 Attached to 0000:00:10.0 00:04:33.688 Attached to 0000:00:11.0 00:04:33.688 Attached to 0000:00:13.0 00:04:33.688 Attached to 0000:00:12.0 00:04:33.688 Cleaning up... 
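The env_dpdk_post_init run above was started with the two flags visible in its run_test line: '-c 0x1' restricts EAL to a single core (lcore 0) and '--base-virtaddr=0x200000000000' pins where EAL reserves virtual address space, keeping mappings predictable for multi-process setups. Reconstructed invocation, with path and flags exactly as in this log and sudo assumed for hugepage/PCI access:

  $ sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000

With that environment up, it attached the four emulated QEMU NVMe controllers (vendor:device 1b36:0010) at 0000:00:10.0 through 0000:00:13.0, as the 'Attaching to'/'Attached to' lines above show.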
00:04:33.688 ************************************ 00:04:33.688 END TEST env_dpdk_post_init 00:04:33.688 ************************************ 00:04:33.688 00:04:33.688 real 0m0.234s 00:04:33.688 user 0m0.068s 00:04:33.688 sys 0m0.069s 00:04:33.688 12:30:33 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.688 12:30:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.688 12:30:33 env -- env/env.sh@26 -- # uname 00:04:33.688 12:30:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.688 12:30:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.688 12:30:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.688 12:30:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.688 12:30:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.688 ************************************ 00:04:33.688 START TEST env_mem_callbacks 00:04:33.688 ************************************ 00:04:33.688 12:30:33 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.688 EAL: Detected CPU lcores: 10 00:04:33.688 EAL: Detected NUMA nodes: 1 00:04:33.688 EAL: Detected shared linkage of DPDK 00:04:33.949 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.949 EAL: Selected IOVA mode 'PA' 00:04:33.949 00:04:33.949 00:04:33.949 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.949 http://cunit.sourceforge.net/ 00:04:33.949 00:04:33.949 00:04:33.949 Suite: memory 00:04:33.949 Test: test ... 00:04:33.949 register 0x200000200000 2097152 00:04:33.949 malloc 3145728 00:04:33.949 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.949 register 0x200000400000 4194304 00:04:33.949 buf 0x2000004fffc0 len 3145728 PASSED 00:04:33.949 malloc 64 00:04:33.949 buf 0x2000004ffec0 len 64 PASSED 00:04:33.949 malloc 4194304 00:04:33.949 register 0x200000800000 6291456 00:04:33.949 buf 0x2000009fffc0 len 4194304 PASSED 00:04:33.949 free 0x2000004fffc0 3145728 00:04:33.949 free 0x2000004ffec0 64 00:04:33.949 unregister 0x200000400000 4194304 PASSED 00:04:33.949 free 0x2000009fffc0 4194304 00:04:33.949 unregister 0x200000800000 6291456 PASSED 00:04:33.949 malloc 8388608 00:04:33.949 register 0x200000400000 10485760 00:04:33.949 buf 0x2000005fffc0 len 8388608 PASSED 00:04:33.949 free 0x2000005fffc0 8388608 00:04:33.949 unregister 0x200000400000 10485760 PASSED 00:04:33.949 passed 00:04:33.949 00:04:33.949 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.949 suites 1 1 n/a 0 0 00:04:33.949 tests 1 1 1 0 0 00:04:33.949 asserts 15 15 15 0 n/a 00:04:33.949 00:04:33.949 Elapsed time = 0.040 seconds 00:04:33.949 ************************************ 00:04:33.949 END TEST env_mem_callbacks 00:04:33.949 ************************************ 00:04:33.949 00:04:33.949 real 0m0.205s 00:04:33.949 user 0m0.063s 00:04:33.949 sys 0m0.040s 00:04:33.949 12:30:33 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.949 12:30:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.949 ************************************ 00:04:33.949 END TEST env 00:04:33.949 ************************************ 00:04:33.949 00:04:33.949 real 0m6.049s 00:04:33.949 user 0m4.689s 00:04:33.949 sys 0m0.979s 00:04:33.949 12:30:33 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.949 12:30:33 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:34.212 12:30:33 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:34.212 12:30:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.212 12:30:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.212 12:30:33 -- common/autotest_common.sh@10 -- # set +x 00:04:34.212 ************************************ 00:04:34.212 START TEST rpc 00:04:34.212 ************************************ 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:34.212 * Looking for test storage... 00:04:34.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.212 12:30:33 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.212 12:30:33 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.212 12:30:33 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.212 12:30:33 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.212 12:30:33 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.212 12:30:33 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.212 12:30:33 rpc -- scripts/common.sh@345 -- # : 1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.212 12:30:33 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:34.212 12:30:33 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.212 12:30:33 rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.212 12:30:33 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.212 12:30:33 rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.212 12:30:33 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.212 12:30:33 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.212 12:30:33 rpc -- scripts/common.sh@368 -- # return 0 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.212 --rc genhtml_branch_coverage=1 00:04:34.212 --rc genhtml_function_coverage=1 00:04:34.212 --rc genhtml_legend=1 00:04:34.212 --rc geninfo_all_blocks=1 00:04:34.212 --rc geninfo_unexecuted_blocks=1 00:04:34.212 00:04:34.212 ' 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.212 --rc genhtml_branch_coverage=1 00:04:34.212 --rc genhtml_function_coverage=1 00:04:34.212 --rc genhtml_legend=1 00:04:34.212 --rc geninfo_all_blocks=1 00:04:34.212 --rc geninfo_unexecuted_blocks=1 00:04:34.212 00:04:34.212 ' 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.212 --rc genhtml_branch_coverage=1 00:04:34.212 --rc genhtml_function_coverage=1 00:04:34.212 --rc genhtml_legend=1 00:04:34.212 --rc geninfo_all_blocks=1 00:04:34.212 --rc geninfo_unexecuted_blocks=1 00:04:34.212 00:04:34.212 ' 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.212 --rc genhtml_branch_coverage=1 00:04:34.212 --rc genhtml_function_coverage=1 00:04:34.212 --rc genhtml_legend=1 00:04:34.212 --rc geninfo_all_blocks=1 00:04:34.212 --rc geninfo_unexecuted_blocks=1 00:04:34.212 00:04:34.212 ' 00:04:34.212 12:30:33 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58906 00:04:34.212 12:30:33 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.212 12:30:33 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58906 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@835 -- # '[' -z 58906 ']' 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.212 12:30:33 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:34.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
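The rpc suite starting here launches the SPDK target with bdev tracepoints enabled ('-e bdev', which is why the trace test later reports tpoint_group_mask 0x8) and then drives it over /var/tmp/spdk.sock via the rpc_cmd helper. A condensed sketch of the calls the integrity tests below make, with arguments taken from the xtrace that follows; the one assumption is that rpc_cmd wraps the repo's scripts/rpc.py:

  $ /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  $ # ...wait until the target listens on /var/tmp/spdk.sock...
  $ scripts/rpc.py bdev_malloc_create 8 512               # creates Malloc0
  $ scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  $ scripts/rpc.py bdev_get_bdevs                         # Malloc0 now claimed by Passthru0
  $ scripts/rpc.py bdev_passthru_delete Passthru0
  $ scripts/rpc.py bdev_malloc_delete Malloc0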
00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.212 12:30:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.212 [2024-12-14 12:30:33.938264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:34.212 [2024-12-14 12:30:33.938656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58906 ] 00:04:34.473 [2024-12-14 12:30:34.098194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.473 [2024-12-14 12:30:34.196808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.473 [2024-12-14 12:30:34.197019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58906' to capture a snapshot of events at runtime. 00:04:34.473 [2024-12-14 12:30:34.197035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.473 [2024-12-14 12:30:34.197044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.473 [2024-12-14 12:30:34.197051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58906 for offline analysis/debug. 00:04:34.473 [2024-12-14 12:30:34.197936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.409 12:30:34 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.409 12:30:34 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.409 12:30:34 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.409 12:30:34 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.409 12:30:34 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:35.409 12:30:34 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:35.409 12:30:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.409 12:30:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.409 12:30:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.409 ************************************ 00:04:35.409 START TEST rpc_integrity 00:04:35.409 ************************************ 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.409 12:30:34 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.409 { 00:04:35.409 "name": "Malloc0", 00:04:35.409 "aliases": [ 00:04:35.409 "5f30cc6d-c5e8-4198-912b-7db707f62b8b" 00:04:35.409 ], 00:04:35.409 "product_name": "Malloc disk", 00:04:35.409 "block_size": 512, 00:04:35.409 "num_blocks": 16384, 00:04:35.409 "uuid": "5f30cc6d-c5e8-4198-912b-7db707f62b8b", 00:04:35.409 "assigned_rate_limits": { 00:04:35.409 "rw_ios_per_sec": 0, 00:04:35.409 "rw_mbytes_per_sec": 0, 00:04:35.409 "r_mbytes_per_sec": 0, 00:04:35.409 "w_mbytes_per_sec": 0 00:04:35.409 }, 00:04:35.409 "claimed": false, 00:04:35.409 "zoned": false, 00:04:35.409 "supported_io_types": { 00:04:35.409 "read": true, 00:04:35.409 "write": true, 00:04:35.409 "unmap": true, 00:04:35.409 "flush": true, 00:04:35.409 "reset": true, 00:04:35.409 "nvme_admin": false, 00:04:35.409 "nvme_io": false, 00:04:35.409 "nvme_io_md": false, 00:04:35.409 "write_zeroes": true, 00:04:35.409 "zcopy": true, 00:04:35.409 "get_zone_info": false, 00:04:35.409 "zone_management": false, 00:04:35.409 "zone_append": false, 00:04:35.409 "compare": false, 00:04:35.409 "compare_and_write": false, 00:04:35.409 "abort": true, 00:04:35.409 "seek_hole": false, 00:04:35.409 "seek_data": false, 00:04:35.409 "copy": true, 00:04:35.409 "nvme_iov_md": false 00:04:35.409 }, 00:04:35.409 "memory_domains": [ 00:04:35.409 { 00:04:35.409 "dma_device_id": "system", 00:04:35.409 "dma_device_type": 1 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.409 "dma_device_type": 2 00:04:35.409 } 00:04:35.409 ], 00:04:35.409 "driver_specific": {} 00:04:35.409 } 00:04:35.409 ]' 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.409 [2024-12-14 12:30:34.899559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:35.409 [2024-12-14 12:30:34.899619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.409 [2024-12-14 12:30:34.899642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:35.409 [2024-12-14 12:30:34.899653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.409 [2024-12-14 12:30:34.901827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.409 [2024-12-14 12:30:34.901972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.409 Passthru0 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.409 
12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.409 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.409 { 00:04:35.409 "name": "Malloc0", 00:04:35.409 "aliases": [ 00:04:35.409 "5f30cc6d-c5e8-4198-912b-7db707f62b8b" 00:04:35.409 ], 00:04:35.409 "product_name": "Malloc disk", 00:04:35.409 "block_size": 512, 00:04:35.409 "num_blocks": 16384, 00:04:35.409 "uuid": "5f30cc6d-c5e8-4198-912b-7db707f62b8b", 00:04:35.409 "assigned_rate_limits": { 00:04:35.409 "rw_ios_per_sec": 0, 00:04:35.409 "rw_mbytes_per_sec": 0, 00:04:35.409 "r_mbytes_per_sec": 0, 00:04:35.409 "w_mbytes_per_sec": 0 00:04:35.409 }, 00:04:35.409 "claimed": true, 00:04:35.409 "claim_type": "exclusive_write", 00:04:35.409 "zoned": false, 00:04:35.409 "supported_io_types": { 00:04:35.409 "read": true, 00:04:35.409 "write": true, 00:04:35.409 "unmap": true, 00:04:35.409 "flush": true, 00:04:35.409 "reset": true, 00:04:35.409 "nvme_admin": false, 00:04:35.409 "nvme_io": false, 00:04:35.409 "nvme_io_md": false, 00:04:35.409 "write_zeroes": true, 00:04:35.409 "zcopy": true, 00:04:35.409 "get_zone_info": false, 00:04:35.409 "zone_management": false, 00:04:35.409 "zone_append": false, 00:04:35.409 "compare": false, 00:04:35.409 "compare_and_write": false, 00:04:35.409 "abort": true, 00:04:35.409 "seek_hole": false, 00:04:35.409 "seek_data": false, 00:04:35.409 "copy": true, 00:04:35.409 "nvme_iov_md": false 00:04:35.409 }, 00:04:35.409 "memory_domains": [ 00:04:35.409 { 00:04:35.409 "dma_device_id": "system", 00:04:35.409 "dma_device_type": 1 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.409 "dma_device_type": 2 00:04:35.409 } 00:04:35.409 ], 00:04:35.409 "driver_specific": {} 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "name": "Passthru0", 00:04:35.409 "aliases": [ 00:04:35.409 "c4436a9c-319a-5a4c-acfe-eee0106e79fd" 00:04:35.409 ], 00:04:35.409 "product_name": "passthru", 00:04:35.409 "block_size": 512, 00:04:35.409 "num_blocks": 16384, 00:04:35.409 "uuid": "c4436a9c-319a-5a4c-acfe-eee0106e79fd", 00:04:35.409 "assigned_rate_limits": { 00:04:35.409 "rw_ios_per_sec": 0, 00:04:35.409 "rw_mbytes_per_sec": 0, 00:04:35.409 "r_mbytes_per_sec": 0, 00:04:35.409 "w_mbytes_per_sec": 0 00:04:35.409 }, 00:04:35.409 "claimed": false, 00:04:35.409 "zoned": false, 00:04:35.409 "supported_io_types": { 00:04:35.409 "read": true, 00:04:35.409 "write": true, 00:04:35.409 "unmap": true, 00:04:35.409 "flush": true, 00:04:35.409 "reset": true, 00:04:35.409 "nvme_admin": false, 00:04:35.409 "nvme_io": false, 00:04:35.409 "nvme_io_md": false, 00:04:35.409 "write_zeroes": true, 00:04:35.409 "zcopy": true, 00:04:35.409 "get_zone_info": false, 00:04:35.409 "zone_management": false, 00:04:35.409 "zone_append": false, 00:04:35.409 "compare": false, 00:04:35.409 "compare_and_write": false, 00:04:35.409 "abort": true, 00:04:35.409 "seek_hole": false, 00:04:35.409 "seek_data": false, 00:04:35.409 "copy": true, 00:04:35.409 "nvme_iov_md": false 00:04:35.409 }, 00:04:35.409 "memory_domains": [ 00:04:35.409 { 00:04:35.409 "dma_device_id": "system", 00:04:35.409 "dma_device_type": 1 00:04:35.409 }, 00:04:35.409 { 00:04:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.409 "dma_device_type": 2 
00:04:35.409 } 00:04:35.409 ], 00:04:35.409 "driver_specific": { 00:04:35.409 "passthru": { 00:04:35.409 "name": "Passthru0", 00:04:35.409 "base_bdev_name": "Malloc0" 00:04:35.409 } 00:04:35.409 } 00:04:35.409 } 00:04:35.409 ]' 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.409 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.410 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.410 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:34 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.410 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.410 12:30:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.410 ************************************ 00:04:35.410 END TEST rpc_integrity 00:04:35.410 ************************************ 00:04:35.410 12:30:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.410 00:04:35.410 real 0m0.239s 00:04:35.410 user 0m0.126s 00:04:35.410 sys 0m0.035s 00:04:35.410 12:30:35 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.410 12:30:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:35.410 12:30:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.410 12:30:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.410 12:30:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 ************************************ 00:04:35.410 START TEST rpc_plugins 00:04:35.410 ************************************ 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:35.410 { 00:04:35.410 "name": "Malloc1", 00:04:35.410 "aliases": 
[ 00:04:35.410 "ae5e8683-ea6c-4dee-9971-c8668235039c" 00:04:35.410 ], 00:04:35.410 "product_name": "Malloc disk", 00:04:35.410 "block_size": 4096, 00:04:35.410 "num_blocks": 256, 00:04:35.410 "uuid": "ae5e8683-ea6c-4dee-9971-c8668235039c", 00:04:35.410 "assigned_rate_limits": { 00:04:35.410 "rw_ios_per_sec": 0, 00:04:35.410 "rw_mbytes_per_sec": 0, 00:04:35.410 "r_mbytes_per_sec": 0, 00:04:35.410 "w_mbytes_per_sec": 0 00:04:35.410 }, 00:04:35.410 "claimed": false, 00:04:35.410 "zoned": false, 00:04:35.410 "supported_io_types": { 00:04:35.410 "read": true, 00:04:35.410 "write": true, 00:04:35.410 "unmap": true, 00:04:35.410 "flush": true, 00:04:35.410 "reset": true, 00:04:35.410 "nvme_admin": false, 00:04:35.410 "nvme_io": false, 00:04:35.410 "nvme_io_md": false, 00:04:35.410 "write_zeroes": true, 00:04:35.410 "zcopy": true, 00:04:35.410 "get_zone_info": false, 00:04:35.410 "zone_management": false, 00:04:35.410 "zone_append": false, 00:04:35.410 "compare": false, 00:04:35.410 "compare_and_write": false, 00:04:35.410 "abort": true, 00:04:35.410 "seek_hole": false, 00:04:35.410 "seek_data": false, 00:04:35.410 "copy": true, 00:04:35.410 "nvme_iov_md": false 00:04:35.410 }, 00:04:35.410 "memory_domains": [ 00:04:35.410 { 00:04:35.410 "dma_device_id": "system", 00:04:35.410 "dma_device_type": 1 00:04:35.410 }, 00:04:35.410 { 00:04:35.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.410 "dma_device_type": 2 00:04:35.410 } 00:04:35.410 ], 00:04:35.410 "driver_specific": {} 00:04:35.410 } 00:04:35.410 ]' 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:35.410 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.410 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.668 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:35.668 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.668 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.668 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.668 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:35.668 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:35.668 ************************************ 00:04:35.668 END TEST rpc_plugins 00:04:35.668 ************************************ 00:04:35.668 12:30:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:35.668 00:04:35.668 real 0m0.119s 00:04:35.668 user 0m0.063s 00:04:35.668 sys 0m0.018s 00:04:35.668 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.668 12:30:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:35.668 12:30:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:35.668 12:30:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.668 12:30:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.668 12:30:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.668 ************************************ 00:04:35.668 START TEST rpc_trace_cmd_test 00:04:35.668 ************************************ 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.668 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:35.668 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58906", 00:04:35.668 "tpoint_group_mask": "0x8", 00:04:35.668 "iscsi_conn": { 00:04:35.668 "mask": "0x2", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "scsi": { 00:04:35.668 "mask": "0x4", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "bdev": { 00:04:35.668 "mask": "0x8", 00:04:35.668 "tpoint_mask": "0xffffffffffffffff" 00:04:35.668 }, 00:04:35.668 "nvmf_rdma": { 00:04:35.668 "mask": "0x10", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "nvmf_tcp": { 00:04:35.668 "mask": "0x20", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "ftl": { 00:04:35.668 "mask": "0x40", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "blobfs": { 00:04:35.668 "mask": "0x80", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "dsa": { 00:04:35.668 "mask": "0x200", 00:04:35.668 "tpoint_mask": "0x0" 00:04:35.668 }, 00:04:35.668 "thread": { 00:04:35.669 "mask": "0x400", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "nvme_pcie": { 00:04:35.669 "mask": "0x800", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "iaa": { 00:04:35.669 "mask": "0x1000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "nvme_tcp": { 00:04:35.669 "mask": "0x2000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "bdev_nvme": { 00:04:35.669 "mask": "0x4000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "sock": { 00:04:35.669 "mask": "0x8000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "blob": { 00:04:35.669 "mask": "0x10000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "bdev_raid": { 00:04:35.669 "mask": "0x20000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 }, 00:04:35.669 "scheduler": { 00:04:35.669 "mask": "0x40000", 00:04:35.669 "tpoint_mask": "0x0" 00:04:35.669 } 00:04:35.669 }' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.669 ************************************ 00:04:35.669 END TEST rpc_trace_cmd_test 00:04:35.669 ************************************ 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.669 00:04:35.669 real 0m0.167s 
00:04:35.669 user 0m0.143s 00:04:35.669 sys 0m0.018s 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.669 12:30:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 12:30:35 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.927 12:30:35 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.927 12:30:35 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.927 12:30:35 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.927 12:30:35 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.927 12:30:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 ************************************ 00:04:35.927 START TEST rpc_daemon_integrity 00:04:35.927 ************************************ 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.927 { 00:04:35.927 "name": "Malloc2", 00:04:35.927 "aliases": [ 00:04:35.927 "c6f81e52-ef35-427f-8663-75bd92b63ca3" 00:04:35.927 ], 00:04:35.927 "product_name": "Malloc disk", 00:04:35.927 "block_size": 512, 00:04:35.927 "num_blocks": 16384, 00:04:35.927 "uuid": "c6f81e52-ef35-427f-8663-75bd92b63ca3", 00:04:35.927 "assigned_rate_limits": { 00:04:35.927 "rw_ios_per_sec": 0, 00:04:35.927 "rw_mbytes_per_sec": 0, 00:04:35.927 "r_mbytes_per_sec": 0, 00:04:35.927 "w_mbytes_per_sec": 0 00:04:35.927 }, 00:04:35.927 "claimed": false, 00:04:35.927 "zoned": false, 00:04:35.927 "supported_io_types": { 00:04:35.927 "read": true, 00:04:35.927 "write": true, 00:04:35.927 "unmap": true, 00:04:35.927 "flush": true, 00:04:35.927 "reset": true, 00:04:35.927 "nvme_admin": false, 00:04:35.927 "nvme_io": false, 00:04:35.927 "nvme_io_md": false, 00:04:35.927 "write_zeroes": true, 00:04:35.927 "zcopy": true, 00:04:35.927 "get_zone_info": false, 00:04:35.927 "zone_management": false, 00:04:35.927 "zone_append": false, 00:04:35.927 "compare": false, 00:04:35.927 
"compare_and_write": false, 00:04:35.927 "abort": true, 00:04:35.927 "seek_hole": false, 00:04:35.927 "seek_data": false, 00:04:35.927 "copy": true, 00:04:35.927 "nvme_iov_md": false 00:04:35.927 }, 00:04:35.927 "memory_domains": [ 00:04:35.927 { 00:04:35.927 "dma_device_id": "system", 00:04:35.927 "dma_device_type": 1 00:04:35.927 }, 00:04:35.927 { 00:04:35.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.927 "dma_device_type": 2 00:04:35.927 } 00:04:35.927 ], 00:04:35.927 "driver_specific": {} 00:04:35.927 } 00:04:35.927 ]' 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 [2024-12-14 12:30:35.542919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.927 [2024-12-14 12:30:35.542976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.927 [2024-12-14 12:30:35.542996] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:35.927 [2024-12-14 12:30:35.543007] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.927 [2024-12-14 12:30:35.545159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.927 [2024-12-14 12:30:35.545195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.927 Passthru0 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.927 { 00:04:35.927 "name": "Malloc2", 00:04:35.927 "aliases": [ 00:04:35.927 "c6f81e52-ef35-427f-8663-75bd92b63ca3" 00:04:35.927 ], 00:04:35.927 "product_name": "Malloc disk", 00:04:35.927 "block_size": 512, 00:04:35.927 "num_blocks": 16384, 00:04:35.927 "uuid": "c6f81e52-ef35-427f-8663-75bd92b63ca3", 00:04:35.927 "assigned_rate_limits": { 00:04:35.927 "rw_ios_per_sec": 0, 00:04:35.927 "rw_mbytes_per_sec": 0, 00:04:35.927 "r_mbytes_per_sec": 0, 00:04:35.927 "w_mbytes_per_sec": 0 00:04:35.927 }, 00:04:35.927 "claimed": true, 00:04:35.927 "claim_type": "exclusive_write", 00:04:35.927 "zoned": false, 00:04:35.927 "supported_io_types": { 00:04:35.927 "read": true, 00:04:35.927 "write": true, 00:04:35.927 "unmap": true, 00:04:35.927 "flush": true, 00:04:35.927 "reset": true, 00:04:35.927 "nvme_admin": false, 00:04:35.927 "nvme_io": false, 00:04:35.927 "nvme_io_md": false, 00:04:35.927 "write_zeroes": true, 00:04:35.927 "zcopy": true, 00:04:35.927 "get_zone_info": false, 00:04:35.927 "zone_management": false, 00:04:35.927 "zone_append": false, 00:04:35.927 "compare": false, 00:04:35.927 "compare_and_write": false, 00:04:35.927 "abort": true, 00:04:35.927 "seek_hole": false, 00:04:35.927 "seek_data": false, 
00:04:35.927 "copy": true, 00:04:35.927 "nvme_iov_md": false 00:04:35.927 }, 00:04:35.927 "memory_domains": [ 00:04:35.927 { 00:04:35.927 "dma_device_id": "system", 00:04:35.927 "dma_device_type": 1 00:04:35.927 }, 00:04:35.927 { 00:04:35.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.927 "dma_device_type": 2 00:04:35.927 } 00:04:35.927 ], 00:04:35.927 "driver_specific": {} 00:04:35.927 }, 00:04:35.927 { 00:04:35.927 "name": "Passthru0", 00:04:35.927 "aliases": [ 00:04:35.927 "94412f32-b56d-5d95-bcd4-6431d63904ea" 00:04:35.927 ], 00:04:35.927 "product_name": "passthru", 00:04:35.927 "block_size": 512, 00:04:35.927 "num_blocks": 16384, 00:04:35.927 "uuid": "94412f32-b56d-5d95-bcd4-6431d63904ea", 00:04:35.927 "assigned_rate_limits": { 00:04:35.927 "rw_ios_per_sec": 0, 00:04:35.927 "rw_mbytes_per_sec": 0, 00:04:35.927 "r_mbytes_per_sec": 0, 00:04:35.927 "w_mbytes_per_sec": 0 00:04:35.927 }, 00:04:35.927 "claimed": false, 00:04:35.927 "zoned": false, 00:04:35.927 "supported_io_types": { 00:04:35.927 "read": true, 00:04:35.927 "write": true, 00:04:35.927 "unmap": true, 00:04:35.927 "flush": true, 00:04:35.927 "reset": true, 00:04:35.927 "nvme_admin": false, 00:04:35.927 "nvme_io": false, 00:04:35.927 "nvme_io_md": false, 00:04:35.927 "write_zeroes": true, 00:04:35.927 "zcopy": true, 00:04:35.927 "get_zone_info": false, 00:04:35.927 "zone_management": false, 00:04:35.927 "zone_append": false, 00:04:35.927 "compare": false, 00:04:35.927 "compare_and_write": false, 00:04:35.927 "abort": true, 00:04:35.927 "seek_hole": false, 00:04:35.927 "seek_data": false, 00:04:35.927 "copy": true, 00:04:35.927 "nvme_iov_md": false 00:04:35.927 }, 00:04:35.927 "memory_domains": [ 00:04:35.927 { 00:04:35.927 "dma_device_id": "system", 00:04:35.927 "dma_device_type": 1 00:04:35.927 }, 00:04:35.927 { 00:04:35.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.927 "dma_device_type": 2 00:04:35.927 } 00:04:35.927 ], 00:04:35.927 "driver_specific": { 00:04:35.927 "passthru": { 00:04:35.927 "name": "Passthru0", 00:04:35.927 "base_bdev_name": "Malloc2" 00:04:35.927 } 00:04:35.927 } 00:04:35.927 } 00:04:35.927 ]' 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.927 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:35.928 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:36.185 ************************************ 00:04:36.185 END TEST rpc_daemon_integrity 00:04:36.185 ************************************ 00:04:36.185 12:30:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:36.185 00:04:36.185 real 0m0.233s 00:04:36.185 user 0m0.127s 00:04:36.185 sys 0m0.026s 00:04:36.185 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.185 12:30:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:36.185 12:30:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:36.185 12:30:35 rpc -- rpc/rpc.sh@84 -- # killprocess 58906 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@954 -- # '[' -z 58906 ']' 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@958 -- # kill -0 58906 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@959 -- # uname 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58906 00:04:36.185 killing process with pid 58906 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58906' 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@973 -- # kill 58906 00:04:36.185 12:30:35 rpc -- common/autotest_common.sh@978 -- # wait 58906 00:04:37.559 ************************************ 00:04:37.559 END TEST rpc 00:04:37.559 ************************************ 00:04:37.559 00:04:37.559 real 0m3.383s 00:04:37.559 user 0m3.792s 00:04:37.559 sys 0m0.583s 00:04:37.559 12:30:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.559 12:30:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.559 12:30:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:37.559 12:30:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.559 12:30:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.559 12:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.559 ************************************ 00:04:37.559 START TEST skip_rpc 00:04:37.559 ************************************ 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:37.559 * Looking for test storage... 
00:04:37.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.559 12:30:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.559 12:30:37 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.559 --rc genhtml_branch_coverage=1 00:04:37.559 --rc genhtml_function_coverage=1 00:04:37.559 --rc genhtml_legend=1 00:04:37.559 --rc geninfo_all_blocks=1 00:04:37.559 --rc geninfo_unexecuted_blocks=1 00:04:37.559 00:04:37.559 ' 00:04:37.560 12:30:37 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.560 --rc genhtml_branch_coverage=1 00:04:37.560 --rc genhtml_function_coverage=1 00:04:37.560 --rc genhtml_legend=1 00:04:37.560 --rc geninfo_all_blocks=1 00:04:37.560 --rc geninfo_unexecuted_blocks=1 00:04:37.560 00:04:37.560 ' 00:04:37.560 12:30:37 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.560 --rc genhtml_branch_coverage=1 00:04:37.560 --rc genhtml_function_coverage=1 00:04:37.560 --rc genhtml_legend=1 00:04:37.560 --rc geninfo_all_blocks=1 00:04:37.560 --rc geninfo_unexecuted_blocks=1 00:04:37.560 00:04:37.560 ' 00:04:37.560 12:30:37 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.560 --rc genhtml_branch_coverage=1 00:04:37.560 --rc genhtml_function_coverage=1 00:04:37.560 --rc genhtml_legend=1 00:04:37.560 --rc geninfo_all_blocks=1 00:04:37.560 --rc geninfo_unexecuted_blocks=1 00:04:37.560 00:04:37.560 ' 00:04:37.560 12:30:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:37.560 12:30:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:37.560 12:30:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:37.560 12:30:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.560 12:30:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.560 12:30:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.560 ************************************ 00:04:37.560 START TEST skip_rpc 00:04:37.560 ************************************ 00:04:37.560 12:30:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:37.560 12:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59113 00:04:37.560 12:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.560 12:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:37.560 12:30:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:37.873 [2024-12-14 12:30:37.366119] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:37.873 [2024-12-14 12:30:37.366232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59113 ] 00:04:37.873 [2024-12-14 12:30:37.521295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.144 [2024-12-14 12:30:37.601317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59113 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59113 ']' 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59113 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59113 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.407 killing process with pid 59113 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59113' 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59113 00:04:43.407 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59113 00:04:43.977 00:04:43.977 real 0m6.212s 00:04:43.977 user 0m5.866s 00:04:43.977 sys 0m0.245s 00:04:43.977 12:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.977 12:30:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.977 ************************************ 00:04:43.977 END TEST skip_rpc 00:04:43.977 
************************************ 00:04:43.977 12:30:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.977 12:30:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.977 12:30:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.977 12:30:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.977 ************************************ 00:04:43.977 START TEST skip_rpc_with_json 00:04:43.977 ************************************ 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59212 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59212 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.977 12:30:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.977 [2024-12-14 12:30:43.616197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:43.977 [2024-12-14 12:30:43.616312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:04:44.237 [2024-12-14 12:30:43.771508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.237 [2024-12-14 12:30:43.853416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.802 [2024-12-14 12:30:44.448758] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:44.802 request: 00:04:44.802 { 00:04:44.802 "trtype": "tcp", 00:04:44.802 "method": "nvmf_get_transports", 00:04:44.802 "req_id": 1 00:04:44.802 } 00:04:44.802 Got JSON-RPC error response 00:04:44.802 response: 00:04:44.802 { 00:04:44.802 "code": -19, 00:04:44.802 "message": "No such device" 00:04:44.802 } 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:44.802 [2024-12-14 12:30:44.460848] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.802 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.060 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.060 12:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.060 { 00:04:45.060 "subsystems": [ 00:04:45.060 { 00:04:45.060 "subsystem": "fsdev", 00:04:45.060 "config": [ 00:04:45.060 { 00:04:45.060 "method": "fsdev_set_opts", 00:04:45.060 "params": { 00:04:45.060 "fsdev_io_pool_size": 65535, 00:04:45.060 "fsdev_io_cache_size": 256 00:04:45.060 } 00:04:45.060 } 00:04:45.060 ] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "keyring", 00:04:45.060 "config": [] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "iobuf", 00:04:45.060 "config": [ 00:04:45.060 { 00:04:45.060 "method": "iobuf_set_options", 00:04:45.060 "params": { 00:04:45.060 "small_pool_count": 8192, 00:04:45.060 "large_pool_count": 1024, 00:04:45.060 "small_bufsize": 8192, 00:04:45.060 "large_bufsize": 135168, 00:04:45.060 "enable_numa": false 00:04:45.060 } 00:04:45.060 } 00:04:45.060 ] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "sock", 00:04:45.060 "config": [ 00:04:45.060 { 
00:04:45.060 "method": "sock_set_default_impl", 00:04:45.060 "params": { 00:04:45.060 "impl_name": "posix" 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "sock_impl_set_options", 00:04:45.060 "params": { 00:04:45.060 "impl_name": "ssl", 00:04:45.060 "recv_buf_size": 4096, 00:04:45.060 "send_buf_size": 4096, 00:04:45.060 "enable_recv_pipe": true, 00:04:45.060 "enable_quickack": false, 00:04:45.060 "enable_placement_id": 0, 00:04:45.060 "enable_zerocopy_send_server": true, 00:04:45.060 "enable_zerocopy_send_client": false, 00:04:45.060 "zerocopy_threshold": 0, 00:04:45.060 "tls_version": 0, 00:04:45.060 "enable_ktls": false 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "sock_impl_set_options", 00:04:45.060 "params": { 00:04:45.060 "impl_name": "posix", 00:04:45.060 "recv_buf_size": 2097152, 00:04:45.060 "send_buf_size": 2097152, 00:04:45.060 "enable_recv_pipe": true, 00:04:45.060 "enable_quickack": false, 00:04:45.060 "enable_placement_id": 0, 00:04:45.060 "enable_zerocopy_send_server": true, 00:04:45.060 "enable_zerocopy_send_client": false, 00:04:45.060 "zerocopy_threshold": 0, 00:04:45.060 "tls_version": 0, 00:04:45.060 "enable_ktls": false 00:04:45.060 } 00:04:45.060 } 00:04:45.060 ] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "vmd", 00:04:45.060 "config": [] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "accel", 00:04:45.060 "config": [ 00:04:45.060 { 00:04:45.060 "method": "accel_set_options", 00:04:45.060 "params": { 00:04:45.060 "small_cache_size": 128, 00:04:45.060 "large_cache_size": 16, 00:04:45.060 "task_count": 2048, 00:04:45.060 "sequence_count": 2048, 00:04:45.060 "buf_count": 2048 00:04:45.060 } 00:04:45.060 } 00:04:45.060 ] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "bdev", 00:04:45.060 "config": [ 00:04:45.060 { 00:04:45.060 "method": "bdev_set_options", 00:04:45.060 "params": { 00:04:45.060 "bdev_io_pool_size": 65535, 00:04:45.060 "bdev_io_cache_size": 256, 00:04:45.060 "bdev_auto_examine": true, 00:04:45.060 "iobuf_small_cache_size": 128, 00:04:45.060 "iobuf_large_cache_size": 16 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "bdev_raid_set_options", 00:04:45.060 "params": { 00:04:45.060 "process_window_size_kb": 1024, 00:04:45.060 "process_max_bandwidth_mb_sec": 0 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "bdev_iscsi_set_options", 00:04:45.060 "params": { 00:04:45.060 "timeout_sec": 30 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "bdev_nvme_set_options", 00:04:45.060 "params": { 00:04:45.060 "action_on_timeout": "none", 00:04:45.060 "timeout_us": 0, 00:04:45.060 "timeout_admin_us": 0, 00:04:45.060 "keep_alive_timeout_ms": 10000, 00:04:45.060 "arbitration_burst": 0, 00:04:45.060 "low_priority_weight": 0, 00:04:45.060 "medium_priority_weight": 0, 00:04:45.060 "high_priority_weight": 0, 00:04:45.060 "nvme_adminq_poll_period_us": 10000, 00:04:45.060 "nvme_ioq_poll_period_us": 0, 00:04:45.060 "io_queue_requests": 0, 00:04:45.060 "delay_cmd_submit": true, 00:04:45.060 "transport_retry_count": 4, 00:04:45.060 "bdev_retry_count": 3, 00:04:45.060 "transport_ack_timeout": 0, 00:04:45.060 "ctrlr_loss_timeout_sec": 0, 00:04:45.060 "reconnect_delay_sec": 0, 00:04:45.060 "fast_io_fail_timeout_sec": 0, 00:04:45.060 "disable_auto_failback": false, 00:04:45.060 "generate_uuids": false, 00:04:45.060 "transport_tos": 0, 00:04:45.060 "nvme_error_stat": false, 00:04:45.060 "rdma_srq_size": 0, 00:04:45.060 "io_path_stat": false, 
00:04:45.060 "allow_accel_sequence": false, 00:04:45.060 "rdma_max_cq_size": 0, 00:04:45.060 "rdma_cm_event_timeout_ms": 0, 00:04:45.060 "dhchap_digests": [ 00:04:45.060 "sha256", 00:04:45.060 "sha384", 00:04:45.060 "sha512" 00:04:45.060 ], 00:04:45.060 "dhchap_dhgroups": [ 00:04:45.060 "null", 00:04:45.060 "ffdhe2048", 00:04:45.060 "ffdhe3072", 00:04:45.060 "ffdhe4096", 00:04:45.060 "ffdhe6144", 00:04:45.060 "ffdhe8192" 00:04:45.060 ], 00:04:45.060 "rdma_umr_per_io": false 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "bdev_nvme_set_hotplug", 00:04:45.060 "params": { 00:04:45.060 "period_us": 100000, 00:04:45.060 "enable": false 00:04:45.060 } 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "method": "bdev_wait_for_examine" 00:04:45.060 } 00:04:45.060 ] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "scsi", 00:04:45.060 "config": null 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "scheduler", 00:04:45.060 "config": [ 00:04:45.060 { 00:04:45.060 "method": "framework_set_scheduler", 00:04:45.060 "params": { 00:04:45.060 "name": "static" 00:04:45.060 } 00:04:45.060 } 00:04:45.060 ] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "vhost_scsi", 00:04:45.060 "config": [] 00:04:45.060 }, 00:04:45.060 { 00:04:45.060 "subsystem": "vhost_blk", 00:04:45.060 "config": [] 00:04:45.060 }, 00:04:45.061 { 00:04:45.061 "subsystem": "ublk", 00:04:45.061 "config": [] 00:04:45.061 }, 00:04:45.061 { 00:04:45.061 "subsystem": "nbd", 00:04:45.061 "config": [] 00:04:45.061 }, 00:04:45.061 { 00:04:45.061 "subsystem": "nvmf", 00:04:45.061 "config": [ 00:04:45.061 { 00:04:45.061 "method": "nvmf_set_config", 00:04:45.061 "params": { 00:04:45.061 "discovery_filter": "match_any", 00:04:45.061 "admin_cmd_passthru": { 00:04:45.061 "identify_ctrlr": false 00:04:45.061 }, 00:04:45.061 "dhchap_digests": [ 00:04:45.061 "sha256", 00:04:45.061 "sha384", 00:04:45.061 "sha512" 00:04:45.061 ], 00:04:45.061 "dhchap_dhgroups": [ 00:04:45.061 "null", 00:04:45.061 "ffdhe2048", 00:04:45.061 "ffdhe3072", 00:04:45.061 "ffdhe4096", 00:04:45.061 "ffdhe6144", 00:04:45.061 "ffdhe8192" 00:04:45.061 ] 00:04:45.061 } 00:04:45.061 }, 00:04:45.061 { 00:04:45.061 "method": "nvmf_set_max_subsystems", 00:04:45.061 "params": { 00:04:45.061 "max_subsystems": 1024 00:04:45.061 } 00:04:45.061 }, 00:04:45.061 { 00:04:45.061 "method": "nvmf_set_crdt", 00:04:45.061 "params": { 00:04:45.061 "crdt1": 0, 00:04:45.061 "crdt2": 0, 00:04:45.061 "crdt3": 0 00:04:45.061 } 00:04:45.061 }, 00:04:45.061 { 00:04:45.061 "method": "nvmf_create_transport", 00:04:45.061 "params": { 00:04:45.061 "trtype": "TCP", 00:04:45.061 "max_queue_depth": 128, 00:04:45.061 "max_io_qpairs_per_ctrlr": 127, 00:04:45.061 "in_capsule_data_size": 4096, 00:04:45.061 "max_io_size": 131072, 00:04:45.061 "io_unit_size": 131072, 00:04:45.061 "max_aq_depth": 128, 00:04:45.061 "num_shared_buffers": 511, 00:04:45.061 "buf_cache_size": 4294967295, 00:04:45.061 "dif_insert_or_strip": false, 00:04:45.061 "zcopy": false, 00:04:45.061 "c2h_success": true, 00:04:45.061 "sock_priority": 0, 00:04:45.061 "abort_timeout_sec": 1, 00:04:45.061 "ack_timeout": 0, 00:04:45.061 "data_wr_pool_size": 0 00:04:45.061 } 00:04:45.061 } 00:04:45.061 ] 00:04:45.061 }, 00:04:45.061 { 00:04:45.061 "subsystem": "iscsi", 00:04:45.061 "config": [ 00:04:45.061 { 00:04:45.061 "method": "iscsi_set_options", 00:04:45.061 "params": { 00:04:45.061 "node_base": "iqn.2016-06.io.spdk", 00:04:45.061 "max_sessions": 128, 00:04:45.061 "max_connections_per_session": 2, 00:04:45.061 
"max_queue_depth": 64, 00:04:45.061 "default_time2wait": 2, 00:04:45.061 "default_time2retain": 20, 00:04:45.061 "first_burst_length": 8192, 00:04:45.061 "immediate_data": true, 00:04:45.061 "allow_duplicated_isid": false, 00:04:45.061 "error_recovery_level": 0, 00:04:45.061 "nop_timeout": 60, 00:04:45.061 "nop_in_interval": 30, 00:04:45.061 "disable_chap": false, 00:04:45.061 "require_chap": false, 00:04:45.061 "mutual_chap": false, 00:04:45.061 "chap_group": 0, 00:04:45.061 "max_large_datain_per_connection": 64, 00:04:45.061 "max_r2t_per_connection": 4, 00:04:45.061 "pdu_pool_size": 36864, 00:04:45.061 "immediate_data_pool_size": 16384, 00:04:45.061 "data_out_pool_size": 2048 00:04:45.061 } 00:04:45.061 } 00:04:45.061 ] 00:04:45.061 } 00:04:45.061 ] 00:04:45.061 } 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59212 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59212 ']' 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59212 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59212 00:04:45.061 killing process with pid 59212 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59212' 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59212 00:04:45.061 12:30:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59212 00:04:46.441 12:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59246 00:04:46.441 12:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:46.441 12:30:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59246 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59246 ']' 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59246 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59246 00:04:51.708 killing process with pid 59246 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59246' 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59246 00:04:51.708 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59246 00:04:52.273 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:52.273 12:30:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:52.273 00:04:52.273 real 0m8.464s 00:04:52.273 user 0m8.124s 00:04:52.273 sys 0m0.557s 00:04:52.273 12:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.273 12:30:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:52.273 ************************************ 00:04:52.273 END TEST skip_rpc_with_json 00:04:52.273 ************************************ 00:04:52.531 12:30:52 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:52.531 12:30:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.531 12:30:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.531 12:30:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.531 ************************************ 00:04:52.531 START TEST skip_rpc_with_delay 00:04:52.531 ************************************ 00:04:52.531 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:52.531 12:30:52 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.531 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:52.531 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.531 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.531 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:52.532 [2024-12-14 12:30:52.118708] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:52.532 ************************************ 00:04:52.532 END TEST skip_rpc_with_delay 00:04:52.532 ************************************ 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.532 00:04:52.532 real 0m0.119s 00:04:52.532 user 0m0.065s 00:04:52.532 sys 0m0.053s 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.532 12:30:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:52.532 12:30:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:52.532 12:30:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:52.532 12:30:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:52.532 12:30:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.532 12:30:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.532 12:30:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.532 ************************************ 00:04:52.532 START TEST exit_on_failed_rpc_init 00:04:52.532 ************************************ 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59368 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59368 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59368 ']' 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:52.532 12:30:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.790 [2024-12-14 12:30:52.277431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:52.790 [2024-12-14 12:30:52.277547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59368 ] 00:04:52.790 [2024-12-14 12:30:52.433119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.790 [2024-12-14 12:30:52.512558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:53.725 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:53.725 [2024-12-14 12:30:53.180392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:53.725 [2024-12-14 12:30:53.180623] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59386 ] 00:04:53.725 [2024-12-14 12:30:53.334203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.725 [2024-12-14 12:30:53.426552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.725 [2024-12-14 12:30:53.426622] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:53.725 [2024-12-14 12:30:53.426635] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:53.725 [2024-12-14 12:30:53.426648] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59368 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59368 ']' 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59368 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59368 00:04:53.984 killing process with pid 59368 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59368' 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59368 00:04:53.984 12:30:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59368 00:04:55.363 ************************************ 00:04:55.363 END TEST exit_on_failed_rpc_init 00:04:55.363 ************************************ 00:04:55.363 00:04:55.363 real 0m2.578s 00:04:55.363 user 0m2.878s 00:04:55.363 sys 0m0.385s 00:04:55.363 12:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.363 12:30:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.363 12:30:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.363 00:04:55.363 real 0m17.675s 00:04:55.363 user 0m17.062s 00:04:55.363 sys 0m1.418s 00:04:55.363 12:30:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.363 12:30:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.363 ************************************ 00:04:55.363 END TEST skip_rpc 00:04:55.363 ************************************ 00:04:55.363 12:30:54 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:55.363 12:30:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.363 12:30:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.363 12:30:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.363 
************************************ 00:04:55.363 START TEST rpc_client 00:04:55.363 ************************************ 00:04:55.363 12:30:54 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:55.363 * Looking for test storage... 00:04:55.363 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:55.363 12:30:54 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.363 12:30:54 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.363 12:30:54 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.363 12:30:54 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.363 12:30:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:55.363 12:30:54 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.363 12:30:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.363 --rc genhtml_branch_coverage=1 00:04:55.363 --rc genhtml_function_coverage=1 00:04:55.363 --rc genhtml_legend=1 00:04:55.363 --rc geninfo_all_blocks=1 00:04:55.363 --rc geninfo_unexecuted_blocks=1 00:04:55.363 00:04:55.363 ' 00:04:55.363 12:30:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.363 --rc genhtml_branch_coverage=1 00:04:55.363 --rc genhtml_function_coverage=1 00:04:55.363 --rc genhtml_legend=1 00:04:55.363 --rc geninfo_all_blocks=1 00:04:55.363 --rc geninfo_unexecuted_blocks=1 00:04:55.363 00:04:55.363 ' 00:04:55.363 12:30:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.363 --rc genhtml_branch_coverage=1 00:04:55.363 --rc genhtml_function_coverage=1 00:04:55.363 --rc genhtml_legend=1 00:04:55.363 --rc geninfo_all_blocks=1 00:04:55.363 --rc geninfo_unexecuted_blocks=1 00:04:55.363 00:04:55.363 ' 00:04:55.363 12:30:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.363 --rc genhtml_branch_coverage=1 00:04:55.363 --rc genhtml_function_coverage=1 00:04:55.363 --rc genhtml_legend=1 00:04:55.363 --rc geninfo_all_blocks=1 00:04:55.363 --rc geninfo_unexecuted_blocks=1 00:04:55.363 00:04:55.363 ' 00:04:55.363 12:30:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:55.363 OK 00:04:55.363 12:30:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:55.363 ************************************ 00:04:55.363 END TEST rpc_client 00:04:55.363 ************************************ 00:04:55.363 00:04:55.363 real 0m0.194s 00:04:55.363 user 0m0.106s 00:04:55.363 sys 0m0.093s 00:04:55.363 12:30:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.363 12:30:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:55.363 12:30:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.363 12:30:55 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.363 12:30:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.363 12:30:55 -- common/autotest_common.sh@10 -- # set +x 00:04:55.363 ************************************ 00:04:55.363 START TEST json_config 00:04:55.363 ************************************ 00:04:55.363 12:30:55 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.644 12:30:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.644 12:30:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.644 12:30:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.644 12:30:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.644 12:30:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.644 12:30:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:55.644 12:30:55 json_config -- scripts/common.sh@345 -- # : 1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.644 12:30:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.644 12:30:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@353 -- # local d=1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.644 12:30:55 json_config -- scripts/common.sh@355 -- # echo 1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.644 12:30:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@353 -- # local d=2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.644 12:30:55 json_config -- scripts/common.sh@355 -- # echo 2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.644 12:30:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.644 12:30:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.644 12:30:55 json_config -- scripts/common.sh@368 -- # return 0 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.644 --rc genhtml_branch_coverage=1 00:04:55.644 --rc genhtml_function_coverage=1 00:04:55.644 --rc genhtml_legend=1 00:04:55.644 --rc geninfo_all_blocks=1 00:04:55.644 --rc geninfo_unexecuted_blocks=1 00:04:55.644 00:04:55.644 ' 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.644 --rc genhtml_branch_coverage=1 00:04:55.644 --rc genhtml_function_coverage=1 00:04:55.644 --rc genhtml_legend=1 00:04:55.644 --rc geninfo_all_blocks=1 00:04:55.644 --rc geninfo_unexecuted_blocks=1 00:04:55.644 00:04:55.644 ' 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.644 --rc genhtml_branch_coverage=1 00:04:55.644 --rc genhtml_function_coverage=1 00:04:55.644 --rc genhtml_legend=1 00:04:55.644 --rc geninfo_all_blocks=1 00:04:55.644 --rc geninfo_unexecuted_blocks=1 00:04:55.644 00:04:55.644 ' 00:04:55.644 12:30:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.644 --rc genhtml_branch_coverage=1 00:04:55.644 --rc genhtml_function_coverage=1 00:04:55.644 --rc genhtml_legend=1 00:04:55.644 --rc geninfo_all_blocks=1 00:04:55.644 --rc geninfo_unexecuted_blocks=1 00:04:55.644 00:04:55.644 ' 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.644 12:30:55 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78be8a9e-58b2-4e5c-9711-0955207b4fd9 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=78be8a9e-58b2-4e5c-9711-0955207b4fd9 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.644 12:30:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.644 12:30:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.644 12:30:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.644 12:30:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.644 12:30:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.644 12:30:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.644 12:30:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.644 12:30:55 json_config -- paths/export.sh@5 -- # export PATH 00:04:55.644 12:30:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@51 -- # : 0 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.644 12:30:55 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.644 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.644 12:30:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.644 12:30:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:55.644 WARNING: No tests are enabled so not running JSON configuration tests 00:04:55.645 12:30:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:55.645 00:04:55.645 real 0m0.142s 00:04:55.645 user 0m0.089s 00:04:55.645 sys 0m0.054s 00:04:55.645 12:30:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.645 12:30:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.645 ************************************ 00:04:55.645 END TEST json_config 00:04:55.645 ************************************ 00:04:55.645 12:30:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.645 12:30:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.645 12:30:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.645 12:30:55 -- common/autotest_common.sh@10 -- # set +x 00:04:55.645 ************************************ 00:04:55.645 START TEST json_config_extra_key 00:04:55.645 ************************************ 00:04:55.645 12:30:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.645 12:30:55 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.645 12:30:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.645 12:30:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.645 12:30:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.645 12:30:55 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.645 12:30:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.904 --rc genhtml_branch_coverage=1 00:04:55.904 --rc genhtml_function_coverage=1 00:04:55.904 --rc genhtml_legend=1 00:04:55.904 --rc geninfo_all_blocks=1 00:04:55.904 --rc geninfo_unexecuted_blocks=1 00:04:55.904 00:04:55.904 ' 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.904 --rc genhtml_branch_coverage=1 00:04:55.904 --rc genhtml_function_coverage=1 00:04:55.904 --rc genhtml_legend=1 00:04:55.904 --rc geninfo_all_blocks=1 00:04:55.904 --rc geninfo_unexecuted_blocks=1 00:04:55.904 00:04:55.904 ' 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.904 --rc genhtml_branch_coverage=1 00:04:55.904 --rc genhtml_function_coverage=1 00:04:55.904 --rc genhtml_legend=1 00:04:55.904 --rc geninfo_all_blocks=1 00:04:55.904 --rc geninfo_unexecuted_blocks=1 00:04:55.904 00:04:55.904 ' 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.904 --rc genhtml_branch_coverage=1 00:04:55.904 --rc 
genhtml_function_coverage=1 00:04:55.904 --rc genhtml_legend=1 00:04:55.904 --rc geninfo_all_blocks=1 00:04:55.904 --rc geninfo_unexecuted_blocks=1 00:04:55.904 00:04:55.904 ' 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:78be8a9e-58b2-4e5c-9711-0955207b4fd9 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=78be8a9e-58b2-4e5c-9711-0955207b4fd9 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.904 12:30:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.904 12:30:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.904 12:30:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.904 12:30:55 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.904 12:30:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:55.904 12:30:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:55.904 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:55.904 12:30:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:55.904 INFO: launching applications... 
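The `[: : integer expression expected` message appears twice so far in this run, once in the json_config suite and again just above here, and both times it is nvmf/common.sh line 33 feeding an empty string to a numeric `[` test, as the trace shows: `'[' '' -eq 1 ']'`. The test returns status 2, the message goes to stderr, and the script keeps going. A minimal reproduction plus the usual guards, as an illustrative sketch only; the variable name below is hypothetical, not the one nvmf/common.sh uses:

    #!/usr/bin/env bash
    # An empty value reaching a numeric test makes `[` print
    # "integer expression expected" and return status 2.
    some_flag=""
    [ "$some_flag" -eq 1 ] && echo "enabled"      # reproduces the error

    # Guard 1: default the value before the numeric comparison.
    [ "${some_flag:-0}" -eq 1 ] && echo "enabled"
    # Guard 2: arithmetic context, where an empty/unset name counts as 0.
    (( some_flag == 1 )) && echo "enabled"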
00:04:55.904 12:30:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59574 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:55.904 Waiting for target to run... 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59574 /var/tmp/spdk_tgt.sock 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59574 ']' 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.904 12:30:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.904 12:30:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:55.904 [2024-12-14 12:30:55.456840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:55.904 [2024-12-14 12:30:55.457101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59574 ] 00:04:56.165 [2024-12-14 12:30:55.752891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.165 [2024-12-14 12:30:55.827651] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.731 12:30:56 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.731 12:30:56 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:56.731 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:56.731 12:30:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:56.731 INFO: shutting down applications... 
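The shutdown trace that follows is a plain poll-until-exit pattern: json_config/common.sh sends SIGINT to the target PID, then loops up to 30 times checking `kill -0` (signal 0 delivers nothing; it only tests whether the PID still exists), sleeping 0.5 s between checks, so the target gets roughly 15 seconds to exit cleanly. A standalone sketch of the same idea; the function name and the final escalation to SIGKILL are illustrative additions, not what the SPDK helper does:

    #!/usr/bin/env bash
    # Ask a process to stop, then poll until it is gone (~15 s budget).
    stop_and_wait() {
        local pid=$1
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it fails once the PID disappears.
            kill -0 "$pid" 2>/dev/null || return 0
            sleep 0.5
        done
        echo "pid $pid still alive after 15s; sending SIGKILL" >&2
        kill -KILL "$pid" 2>/dev/null
    }

    # Usage (PID is illustrative): stop_and_wait "$app_pid"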
00:04:56.731 12:30:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59574 ]] 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59574 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59574 00:04:56.731 12:30:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.297 12:30:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.297 12:30:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.297 12:30:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59574 00:04:57.297 12:30:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:57.555 12:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:57.555 12:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:57.555 12:30:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59574 00:04:57.555 12:30:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.121 SPDK target shutdown done 00:04:58.121 Success 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59574 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:58.121 12:30:57 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:58.121 12:30:57 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:58.121 00:04:58.121 real 0m2.502s 00:04:58.121 user 0m2.182s 00:04:58.121 sys 0m0.382s 00:04:58.121 ************************************ 00:04:58.121 END TEST json_config_extra_key 00:04:58.121 ************************************ 00:04:58.121 12:30:57 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.121 12:30:57 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.121 12:30:57 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.121 12:30:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.121 12:30:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.121 12:30:57 -- common/autotest_common.sh@10 -- # set +x 00:04:58.121 ************************************ 00:04:58.121 START TEST alias_rpc 00:04:58.121 ************************************ 00:04:58.121 12:30:57 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:58.379 * Looking for test storage... 
00:04:58.379 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:58.379 12:30:57 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.379 12:30:57 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.379 12:30:57 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.379 12:30:57 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:58.379 12:30:57 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.380 12:30:57 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.380 --rc genhtml_branch_coverage=1 00:04:58.380 --rc genhtml_function_coverage=1 00:04:58.380 --rc genhtml_legend=1 00:04:58.380 --rc geninfo_all_blocks=1 00:04:58.380 --rc geninfo_unexecuted_blocks=1 00:04:58.380 00:04:58.380 ' 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.380 --rc genhtml_branch_coverage=1 00:04:58.380 --rc genhtml_function_coverage=1 00:04:58.380 --rc genhtml_legend=1 00:04:58.380 --rc geninfo_all_blocks=1 00:04:58.380 --rc geninfo_unexecuted_blocks=1 00:04:58.380 00:04:58.380 ' 00:04:58.380 12:30:57 alias_rpc -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.380 --rc genhtml_branch_coverage=1 00:04:58.380 --rc genhtml_function_coverage=1 00:04:58.380 --rc genhtml_legend=1 00:04:58.380 --rc geninfo_all_blocks=1 00:04:58.380 --rc geninfo_unexecuted_blocks=1 00:04:58.380 00:04:58.380 ' 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.380 --rc genhtml_branch_coverage=1 00:04:58.380 --rc genhtml_function_coverage=1 00:04:58.380 --rc genhtml_legend=1 00:04:58.380 --rc geninfo_all_blocks=1 00:04:58.380 --rc geninfo_unexecuted_blocks=1 00:04:58.380 00:04:58.380 ' 00:04:58.380 12:30:57 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:58.380 12:30:57 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59661 00:04:58.380 12:30:57 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59661 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59661 ']' 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.380 12:30:57 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.380 12:30:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.380 [2024-12-14 12:30:58.019640] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:58.380 [2024-12-14 12:30:58.019942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:04:58.638 [2024-12-14 12:30:58.177363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.638 [2024-12-14 12:30:58.257015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.207 12:30:58 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.207 12:30:58 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:59.207 12:30:58 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:59.465 12:30:59 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59661 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59661 ']' 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59661 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59661 00:04:59.465 killing process with pid 59661 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59661' 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@973 -- # kill 59661 00:04:59.465 12:30:59 alias_rpc -- common/autotest_common.sh@978 -- # wait 59661 00:05:00.843 ************************************ 00:05:00.843 END TEST alias_rpc 00:05:00.843 ************************************ 00:05:00.843 00:05:00.843 real 0m2.473s 00:05:00.843 user 0m2.586s 00:05:00.843 sys 0m0.389s 00:05:00.843 12:31:00 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.843 12:31:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.843 12:31:00 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:00.843 12:31:00 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.843 12:31:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.843 12:31:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.843 12:31:00 -- common/autotest_common.sh@10 -- # set +x 00:05:00.843 ************************************ 00:05:00.843 START TEST spdkcli_tcp 00:05:00.843 ************************************ 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.843 * Looking for test storage... 
00:05:00.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.843 12:31:00 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.843 --rc genhtml_branch_coverage=1 00:05:00.843 --rc genhtml_function_coverage=1 00:05:00.843 --rc genhtml_legend=1 00:05:00.843 --rc geninfo_all_blocks=1 00:05:00.843 --rc geninfo_unexecuted_blocks=1 00:05:00.843 00:05:00.843 ' 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.843 --rc genhtml_branch_coverage=1 00:05:00.843 --rc genhtml_function_coverage=1 00:05:00.843 --rc genhtml_legend=1 00:05:00.843 --rc geninfo_all_blocks=1 00:05:00.843 --rc geninfo_unexecuted_blocks=1 00:05:00.843 
00:05:00.843 ' 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.843 --rc genhtml_branch_coverage=1 00:05:00.843 --rc genhtml_function_coverage=1 00:05:00.843 --rc genhtml_legend=1 00:05:00.843 --rc geninfo_all_blocks=1 00:05:00.843 --rc geninfo_unexecuted_blocks=1 00:05:00.843 00:05:00.843 ' 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.843 --rc genhtml_branch_coverage=1 00:05:00.843 --rc genhtml_function_coverage=1 00:05:00.843 --rc genhtml_legend=1 00:05:00.843 --rc geninfo_all_blocks=1 00:05:00.843 --rc geninfo_unexecuted_blocks=1 00:05:00.843 00:05:00.843 ' 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59751 00:05:00.843 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59751 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59751 ']' 00:05:00.843 12:31:00 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.844 12:31:00 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.844 12:31:00 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.844 12:31:00 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.844 12:31:00 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.844 12:31:00 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.844 [2024-12-14 12:31:00.526605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:00.844 [2024-12-14 12:31:00.527431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59751 ] 00:05:01.103 [2024-12-14 12:31:00.684298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.104 [2024-12-14 12:31:00.768414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.104 [2024-12-14 12:31:00.768506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.674 12:31:01 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.674 12:31:01 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:01.674 12:31:01 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.674 12:31:01 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59768 00:05:01.674 12:31:01 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.936 [ 00:05:01.936 "bdev_malloc_delete", 00:05:01.936 "bdev_malloc_create", 00:05:01.936 "bdev_null_resize", 00:05:01.936 "bdev_null_delete", 00:05:01.936 "bdev_null_create", 00:05:01.936 "bdev_nvme_cuse_unregister", 00:05:01.936 "bdev_nvme_cuse_register", 00:05:01.936 "bdev_opal_new_user", 00:05:01.936 "bdev_opal_set_lock_state", 00:05:01.936 "bdev_opal_delete", 00:05:01.936 "bdev_opal_get_info", 00:05:01.936 "bdev_opal_create", 00:05:01.936 "bdev_nvme_opal_revert", 00:05:01.936 "bdev_nvme_opal_init", 00:05:01.936 "bdev_nvme_send_cmd", 00:05:01.936 "bdev_nvme_set_keys", 00:05:01.936 "bdev_nvme_get_path_iostat", 00:05:01.936 "bdev_nvme_get_mdns_discovery_info", 00:05:01.936 "bdev_nvme_stop_mdns_discovery", 00:05:01.936 "bdev_nvme_start_mdns_discovery", 00:05:01.936 "bdev_nvme_set_multipath_policy", 00:05:01.936 "bdev_nvme_set_preferred_path", 00:05:01.936 "bdev_nvme_get_io_paths", 00:05:01.936 "bdev_nvme_remove_error_injection", 00:05:01.936 "bdev_nvme_add_error_injection", 00:05:01.936 "bdev_nvme_get_discovery_info", 00:05:01.936 "bdev_nvme_stop_discovery", 00:05:01.936 "bdev_nvme_start_discovery", 00:05:01.936 "bdev_nvme_get_controller_health_info", 00:05:01.936 "bdev_nvme_disable_controller", 00:05:01.936 "bdev_nvme_enable_controller", 00:05:01.936 "bdev_nvme_reset_controller", 00:05:01.936 "bdev_nvme_get_transport_statistics", 00:05:01.936 "bdev_nvme_apply_firmware", 00:05:01.936 "bdev_nvme_detach_controller", 00:05:01.936 "bdev_nvme_get_controllers", 00:05:01.936 "bdev_nvme_attach_controller", 00:05:01.936 "bdev_nvme_set_hotplug", 00:05:01.936 "bdev_nvme_set_options", 00:05:01.936 "bdev_passthru_delete", 00:05:01.936 "bdev_passthru_create", 00:05:01.936 "bdev_lvol_set_parent_bdev", 00:05:01.936 "bdev_lvol_set_parent", 00:05:01.936 "bdev_lvol_check_shallow_copy", 00:05:01.936 "bdev_lvol_start_shallow_copy", 00:05:01.936 "bdev_lvol_grow_lvstore", 00:05:01.936 "bdev_lvol_get_lvols", 00:05:01.936 "bdev_lvol_get_lvstores", 00:05:01.936 "bdev_lvol_delete", 00:05:01.936 "bdev_lvol_set_read_only", 00:05:01.936 "bdev_lvol_resize", 00:05:01.936 "bdev_lvol_decouple_parent", 00:05:01.936 "bdev_lvol_inflate", 00:05:01.936 "bdev_lvol_rename", 00:05:01.936 "bdev_lvol_clone_bdev", 00:05:01.936 "bdev_lvol_clone", 00:05:01.936 "bdev_lvol_snapshot", 00:05:01.936 "bdev_lvol_create", 00:05:01.936 "bdev_lvol_delete_lvstore", 00:05:01.936 "bdev_lvol_rename_lvstore", 00:05:01.936 
"bdev_lvol_create_lvstore", 00:05:01.936 "bdev_raid_set_options", 00:05:01.936 "bdev_raid_remove_base_bdev", 00:05:01.936 "bdev_raid_add_base_bdev", 00:05:01.936 "bdev_raid_delete", 00:05:01.936 "bdev_raid_create", 00:05:01.936 "bdev_raid_get_bdevs", 00:05:01.936 "bdev_error_inject_error", 00:05:01.936 "bdev_error_delete", 00:05:01.936 "bdev_error_create", 00:05:01.936 "bdev_split_delete", 00:05:01.936 "bdev_split_create", 00:05:01.936 "bdev_delay_delete", 00:05:01.936 "bdev_delay_create", 00:05:01.936 "bdev_delay_update_latency", 00:05:01.936 "bdev_zone_block_delete", 00:05:01.936 "bdev_zone_block_create", 00:05:01.936 "blobfs_create", 00:05:01.936 "blobfs_detect", 00:05:01.936 "blobfs_set_cache_size", 00:05:01.936 "bdev_xnvme_delete", 00:05:01.936 "bdev_xnvme_create", 00:05:01.936 "bdev_aio_delete", 00:05:01.936 "bdev_aio_rescan", 00:05:01.936 "bdev_aio_create", 00:05:01.936 "bdev_ftl_set_property", 00:05:01.936 "bdev_ftl_get_properties", 00:05:01.936 "bdev_ftl_get_stats", 00:05:01.936 "bdev_ftl_unmap", 00:05:01.936 "bdev_ftl_unload", 00:05:01.936 "bdev_ftl_delete", 00:05:01.936 "bdev_ftl_load", 00:05:01.936 "bdev_ftl_create", 00:05:01.936 "bdev_virtio_attach_controller", 00:05:01.936 "bdev_virtio_scsi_get_devices", 00:05:01.936 "bdev_virtio_detach_controller", 00:05:01.936 "bdev_virtio_blk_set_hotplug", 00:05:01.936 "bdev_iscsi_delete", 00:05:01.936 "bdev_iscsi_create", 00:05:01.936 "bdev_iscsi_set_options", 00:05:01.936 "accel_error_inject_error", 00:05:01.936 "ioat_scan_accel_module", 00:05:01.936 "dsa_scan_accel_module", 00:05:01.936 "iaa_scan_accel_module", 00:05:01.936 "keyring_file_remove_key", 00:05:01.936 "keyring_file_add_key", 00:05:01.936 "keyring_linux_set_options", 00:05:01.936 "fsdev_aio_delete", 00:05:01.936 "fsdev_aio_create", 00:05:01.936 "iscsi_get_histogram", 00:05:01.936 "iscsi_enable_histogram", 00:05:01.936 "iscsi_set_options", 00:05:01.936 "iscsi_get_auth_groups", 00:05:01.936 "iscsi_auth_group_remove_secret", 00:05:01.936 "iscsi_auth_group_add_secret", 00:05:01.936 "iscsi_delete_auth_group", 00:05:01.936 "iscsi_create_auth_group", 00:05:01.936 "iscsi_set_discovery_auth", 00:05:01.936 "iscsi_get_options", 00:05:01.936 "iscsi_target_node_request_logout", 00:05:01.936 "iscsi_target_node_set_redirect", 00:05:01.936 "iscsi_target_node_set_auth", 00:05:01.936 "iscsi_target_node_add_lun", 00:05:01.936 "iscsi_get_stats", 00:05:01.936 "iscsi_get_connections", 00:05:01.936 "iscsi_portal_group_set_auth", 00:05:01.937 "iscsi_start_portal_group", 00:05:01.937 "iscsi_delete_portal_group", 00:05:01.937 "iscsi_create_portal_group", 00:05:01.937 "iscsi_get_portal_groups", 00:05:01.937 "iscsi_delete_target_node", 00:05:01.937 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.937 "iscsi_target_node_add_pg_ig_maps", 00:05:01.937 "iscsi_create_target_node", 00:05:01.937 "iscsi_get_target_nodes", 00:05:01.937 "iscsi_delete_initiator_group", 00:05:01.937 "iscsi_initiator_group_remove_initiators", 00:05:01.937 "iscsi_initiator_group_add_initiators", 00:05:01.937 "iscsi_create_initiator_group", 00:05:01.937 "iscsi_get_initiator_groups", 00:05:01.937 "nvmf_set_crdt", 00:05:01.937 "nvmf_set_config", 00:05:01.937 "nvmf_set_max_subsystems", 00:05:01.937 "nvmf_stop_mdns_prr", 00:05:01.937 "nvmf_publish_mdns_prr", 00:05:01.937 "nvmf_subsystem_get_listeners", 00:05:01.937 "nvmf_subsystem_get_qpairs", 00:05:01.937 "nvmf_subsystem_get_controllers", 00:05:01.937 "nvmf_get_stats", 00:05:01.937 "nvmf_get_transports", 00:05:01.937 "nvmf_create_transport", 00:05:01.937 "nvmf_get_targets", 00:05:01.937 
"nvmf_delete_target", 00:05:01.937 "nvmf_create_target", 00:05:01.937 "nvmf_subsystem_allow_any_host", 00:05:01.937 "nvmf_subsystem_set_keys", 00:05:01.937 "nvmf_subsystem_remove_host", 00:05:01.937 "nvmf_subsystem_add_host", 00:05:01.937 "nvmf_ns_remove_host", 00:05:01.937 "nvmf_ns_add_host", 00:05:01.937 "nvmf_subsystem_remove_ns", 00:05:01.937 "nvmf_subsystem_set_ns_ana_group", 00:05:01.937 "nvmf_subsystem_add_ns", 00:05:01.937 "nvmf_subsystem_listener_set_ana_state", 00:05:01.937 "nvmf_discovery_get_referrals", 00:05:01.937 "nvmf_discovery_remove_referral", 00:05:01.937 "nvmf_discovery_add_referral", 00:05:01.937 "nvmf_subsystem_remove_listener", 00:05:01.937 "nvmf_subsystem_add_listener", 00:05:01.937 "nvmf_delete_subsystem", 00:05:01.937 "nvmf_create_subsystem", 00:05:01.937 "nvmf_get_subsystems", 00:05:01.937 "env_dpdk_get_mem_stats", 00:05:01.937 "nbd_get_disks", 00:05:01.937 "nbd_stop_disk", 00:05:01.937 "nbd_start_disk", 00:05:01.937 "ublk_recover_disk", 00:05:01.937 "ublk_get_disks", 00:05:01.937 "ublk_stop_disk", 00:05:01.937 "ublk_start_disk", 00:05:01.937 "ublk_destroy_target", 00:05:01.937 "ublk_create_target", 00:05:01.937 "virtio_blk_create_transport", 00:05:01.937 "virtio_blk_get_transports", 00:05:01.937 "vhost_controller_set_coalescing", 00:05:01.937 "vhost_get_controllers", 00:05:01.937 "vhost_delete_controller", 00:05:01.937 "vhost_create_blk_controller", 00:05:01.937 "vhost_scsi_controller_remove_target", 00:05:01.937 "vhost_scsi_controller_add_target", 00:05:01.937 "vhost_start_scsi_controller", 00:05:01.937 "vhost_create_scsi_controller", 00:05:01.937 "thread_set_cpumask", 00:05:01.937 "scheduler_set_options", 00:05:01.937 "framework_get_governor", 00:05:01.937 "framework_get_scheduler", 00:05:01.937 "framework_set_scheduler", 00:05:01.937 "framework_get_reactors", 00:05:01.937 "thread_get_io_channels", 00:05:01.937 "thread_get_pollers", 00:05:01.937 "thread_get_stats", 00:05:01.937 "framework_monitor_context_switch", 00:05:01.937 "spdk_kill_instance", 00:05:01.937 "log_enable_timestamps", 00:05:01.937 "log_get_flags", 00:05:01.937 "log_clear_flag", 00:05:01.937 "log_set_flag", 00:05:01.937 "log_get_level", 00:05:01.937 "log_set_level", 00:05:01.937 "log_get_print_level", 00:05:01.937 "log_set_print_level", 00:05:01.937 "framework_enable_cpumask_locks", 00:05:01.937 "framework_disable_cpumask_locks", 00:05:01.937 "framework_wait_init", 00:05:01.937 "framework_start_init", 00:05:01.937 "scsi_get_devices", 00:05:01.937 "bdev_get_histogram", 00:05:01.937 "bdev_enable_histogram", 00:05:01.937 "bdev_set_qos_limit", 00:05:01.937 "bdev_set_qd_sampling_period", 00:05:01.937 "bdev_get_bdevs", 00:05:01.937 "bdev_reset_iostat", 00:05:01.937 "bdev_get_iostat", 00:05:01.937 "bdev_examine", 00:05:01.937 "bdev_wait_for_examine", 00:05:01.937 "bdev_set_options", 00:05:01.937 "accel_get_stats", 00:05:01.937 "accel_set_options", 00:05:01.937 "accel_set_driver", 00:05:01.937 "accel_crypto_key_destroy", 00:05:01.937 "accel_crypto_keys_get", 00:05:01.937 "accel_crypto_key_create", 00:05:01.937 "accel_assign_opc", 00:05:01.937 "accel_get_module_info", 00:05:01.937 "accel_get_opc_assignments", 00:05:01.937 "vmd_rescan", 00:05:01.937 "vmd_remove_device", 00:05:01.937 "vmd_enable", 00:05:01.937 "sock_get_default_impl", 00:05:01.937 "sock_set_default_impl", 00:05:01.937 "sock_impl_set_options", 00:05:01.937 "sock_impl_get_options", 00:05:01.937 "iobuf_get_stats", 00:05:01.937 "iobuf_set_options", 00:05:01.937 "keyring_get_keys", 00:05:01.937 "framework_get_pci_devices", 00:05:01.937 
"framework_get_config", 00:05:01.937 "framework_get_subsystems", 00:05:01.937 "fsdev_set_opts", 00:05:01.937 "fsdev_get_opts", 00:05:01.937 "trace_get_info", 00:05:01.937 "trace_get_tpoint_group_mask", 00:05:01.937 "trace_disable_tpoint_group", 00:05:01.937 "trace_enable_tpoint_group", 00:05:01.937 "trace_clear_tpoint_mask", 00:05:01.937 "trace_set_tpoint_mask", 00:05:01.937 "notify_get_notifications", 00:05:01.937 "notify_get_types", 00:05:01.937 "spdk_get_version", 00:05:01.937 "rpc_get_methods" 00:05:01.937 ] 00:05:01.937 12:31:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 12:31:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.937 12:31:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59751 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59751 ']' 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59751 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59751 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59751' 00:05:01.937 killing process with pid 59751 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59751 00:05:01.937 12:31:01 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59751 00:05:03.393 00:05:03.393 real 0m2.456s 00:05:03.393 user 0m4.401s 00:05:03.393 sys 0m0.421s 00:05:03.393 12:31:02 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.393 12:31:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:03.393 ************************************ 00:05:03.393 END TEST spdkcli_tcp 00:05:03.393 ************************************ 00:05:03.393 12:31:02 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.393 12:31:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.393 12:31:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.393 12:31:02 -- common/autotest_common.sh@10 -- # set +x 00:05:03.393 ************************************ 00:05:03.393 START TEST dpdk_mem_utility 00:05:03.393 ************************************ 00:05:03.393 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.393 * Looking for test storage... 
00:05:03.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:03.393 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.393 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.393 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.393 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:03.393 12:31:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:03.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.394 12:31:02 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.394 --rc genhtml_branch_coverage=1 00:05:03.394 --rc genhtml_function_coverage=1 00:05:03.394 --rc genhtml_legend=1 00:05:03.394 --rc geninfo_all_blocks=1 00:05:03.394 --rc geninfo_unexecuted_blocks=1 00:05:03.394 00:05:03.394 ' 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.394 --rc genhtml_branch_coverage=1 00:05:03.394 --rc genhtml_function_coverage=1 00:05:03.394 --rc genhtml_legend=1 00:05:03.394 --rc geninfo_all_blocks=1 00:05:03.394 --rc geninfo_unexecuted_blocks=1 00:05:03.394 00:05:03.394 ' 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.394 --rc genhtml_branch_coverage=1 00:05:03.394 --rc genhtml_function_coverage=1 00:05:03.394 --rc genhtml_legend=1 00:05:03.394 --rc geninfo_all_blocks=1 00:05:03.394 --rc geninfo_unexecuted_blocks=1 00:05:03.394 00:05:03.394 ' 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.394 --rc genhtml_branch_coverage=1 00:05:03.394 --rc genhtml_function_coverage=1 00:05:03.394 --rc genhtml_legend=1 00:05:03.394 --rc geninfo_all_blocks=1 00:05:03.394 --rc geninfo_unexecuted_blocks=1 00:05:03.394 00:05:03.394 ' 00:05:03.394 12:31:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.394 12:31:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59857 00:05:03.394 12:31:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59857 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59857 ']' 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.394 12:31:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.394 12:31:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:03.394 [2024-12-14 12:31:03.019386] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:03.394 [2024-12-14 12:31:03.019498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59857 ] 00:05:03.668 [2024-12-14 12:31:03.177589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.668 [2024-12-14 12:31:03.274386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.236 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.236 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:04.236 12:31:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:04.236 12:31:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:04.236 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.236 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.236 { 00:05:04.236 "filename": "/tmp/spdk_mem_dump.txt" 00:05:04.236 } 00:05:04.236 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.236 12:31:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:04.236 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:04.236 1 heaps totaling size 824.000000 MiB 00:05:04.236 size: 824.000000 MiB heap id: 0 00:05:04.236 end heaps---------- 00:05:04.236 9 mempools totaling size 603.782043 MiB 00:05:04.236 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:04.236 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:04.236 size: 100.555481 MiB name: bdev_io_59857 00:05:04.236 size: 50.003479 MiB name: msgpool_59857 00:05:04.236 size: 36.509338 MiB name: fsdev_io_59857 00:05:04.236 size: 21.763794 MiB name: PDU_Pool 00:05:04.236 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:04.236 size: 4.133484 MiB name: evtpool_59857 00:05:04.236 size: 0.026123 MiB name: Session_Pool 00:05:04.236 end mempools------- 00:05:04.236 6 memzones totaling size 4.142822 MiB 00:05:04.236 size: 1.000366 MiB name: RG_ring_0_59857 00:05:04.236 size: 1.000366 MiB name: RG_ring_1_59857 00:05:04.236 size: 1.000366 MiB name: RG_ring_4_59857 00:05:04.236 size: 1.000366 MiB name: RG_ring_5_59857 00:05:04.236 size: 0.125366 MiB name: RG_ring_2_59857 00:05:04.236 size: 0.015991 MiB name: RG_ring_3_59857 00:05:04.236 end memzones------- 00:05:04.236 12:31:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:04.236 heap id: 0 total size: 824.000000 MiB number of busy elements: 320 number of free elements: 18 00:05:04.236 list of free elements. 
size: 16.780151 MiB 00:05:04.236 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:04.236 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:04.236 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:04.236 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:04.236 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:04.236 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:04.236 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:04.236 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:04.236 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:04.236 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:04.236 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:04.236 element at address: 0x20001b400000 with size: 0.559021 MiB 00:05:04.236 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:04.236 element at address: 0x200019600000 with size: 0.488464 MiB 00:05:04.236 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:04.236 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:04.236 element at address: 0x200028800000 with size: 0.391663 MiB 00:05:04.236 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:04.236 list of standard malloc elements. size: 199.288940 MiB 00:05:04.236 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:04.236 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:04.236 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:04.236 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:04.236 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:04.236 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:04.236 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:04.236 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:04.236 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:04.236 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:04.236 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:04.236 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:04.236 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:04.236 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:04.236 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:04.236 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:04.236 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:04.236 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:04.237 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:04.237 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d0c0 
with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f1c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f2c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f3c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f4c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f5c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4913c0 with size: 0.000244 MiB 
00:05:04.237 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:04.237 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:04.238 element at 
address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:04.238 element at address: 0x200028864440 with size: 0.000244 MiB 00:05:04.238 element at address: 0x200028864540 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b200 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d380 
with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:04.238 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:04.238 list of memzone associated elements. 
size: 607.930908 MiB 00:05:04.238 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:04.238 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:04.238 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:04.238 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:04.238 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:04.238 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59857_0 00:05:04.238 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:04.238 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59857_0 00:05:04.238 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:04.238 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59857_0 00:05:04.238 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:04.238 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:04.238 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:04.238 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:04.238 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:04.238 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59857_0 00:05:04.238 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:04.238 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59857 00:05:04.238 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:04.238 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59857 00:05:04.238 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:04.238 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:04.238 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:04.238 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:04.238 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:04.238 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:04.238 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:04.238 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:04.238 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:04.238 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59857 00:05:04.238 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:04.238 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59857 00:05:04.238 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:04.238 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59857 00:05:04.238 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:04.238 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59857 00:05:04.238 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:04.238 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59857 00:05:04.238 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:04.238 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59857 00:05:04.238 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:04.238 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:04.238 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:04.238 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:04.238 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:04.239 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:04.239 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:04.239 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59857 00:05:04.239 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:04.239 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59857 00:05:04.239 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:04.239 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:04.239 element at address: 0x200028864640 with size: 0.023804 MiB 00:05:04.239 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:04.239 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:04.239 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59857 00:05:04.239 element at address: 0x20002886a7c0 with size: 0.002502 MiB 00:05:04.239 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:04.239 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:04.239 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59857 00:05:04.239 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:04.239 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59857 00:05:04.239 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:04.239 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59857 00:05:04.239 element at address: 0x20002886b300 with size: 0.000366 MiB 00:05:04.239 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:04.239 12:31:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:04.497 12:31:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59857 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59857 ']' 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59857 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59857 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59857' 00:05:04.497 killing process with pid 59857 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59857 00:05:04.497 12:31:03 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59857 00:05:05.882 00:05:05.882 real 0m2.574s 00:05:05.882 user 0m2.598s 00:05:05.882 sys 0m0.396s 00:05:05.882 12:31:05 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.882 12:31:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:05.882 ************************************ 00:05:05.882 END TEST dpdk_mem_utility 00:05:05.882 ************************************ 00:05:05.882 12:31:05 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.882 12:31:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.882 12:31:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.882 12:31:05 -- common/autotest_common.sh@10 -- # set +x 
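The dpdk_mem_utility sequence traced above can be reproduced by hand roughly as follows. The RPC name, the dump-file path, and the two dpdk_mem_info.py invocations (plain summary, then -m 0 for heap 0) are taken directly from the trace; the spdk_tgt binary path and its startup handling are assumptions, and rpc.py stands in for the harness's rpc_cmd wrapper.

```bash
#!/usr/bin/env bash
# Sketch of the memory-dump flow exercised above; not the harness script itself.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Start an SPDK target on one core (the trace shows -c 0x1).
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
spdk_pid=$!
trap 'kill "$spdk_pid"' SIGINT SIGTERM EXIT

sleep 1  # crude wait for the RPC socket; the real harness polls instead

# Ask the running app to dump DPDK memory stats; the reply names the
# dump file it wrote ("/tmp/spdk_mem_dump.txt" in the trace above).
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarize heaps/mempools/memzones, then list every element in heap 0 --
# the latter is what produces the long "element at address ..." listing.
"$SPDK_DIR/scripts/dpdk_mem_info.py"
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0
```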
00:05:05.882 ************************************ 00:05:05.882 START TEST event 00:05:05.882 ************************************ 00:05:05.882 12:31:05 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:05.882 * Looking for test storage... 00:05:05.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:05.882 12:31:05 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.882 12:31:05 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.882 12:31:05 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.882 12:31:05 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.882 12:31:05 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.882 12:31:05 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.882 12:31:05 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.882 12:31:05 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.882 12:31:05 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.882 12:31:05 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.882 12:31:05 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.882 12:31:05 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.882 12:31:05 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.882 12:31:05 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.882 12:31:05 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.882 12:31:05 event -- scripts/common.sh@344 -- # case "$op" in 00:05:05.882 12:31:05 event -- scripts/common.sh@345 -- # : 1 00:05:05.882 12:31:05 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.883 12:31:05 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.883 12:31:05 event -- scripts/common.sh@365 -- # decimal 1 00:05:05.883 12:31:05 event -- scripts/common.sh@353 -- # local d=1 00:05:05.883 12:31:05 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.883 12:31:05 event -- scripts/common.sh@355 -- # echo 1 00:05:05.883 12:31:05 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.883 12:31:05 event -- scripts/common.sh@366 -- # decimal 2 00:05:05.883 12:31:05 event -- scripts/common.sh@353 -- # local d=2 00:05:05.883 12:31:05 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.883 12:31:05 event -- scripts/common.sh@355 -- # echo 2 00:05:05.883 12:31:05 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.883 12:31:05 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.883 12:31:05 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.883 12:31:05 event -- scripts/common.sh@368 -- # return 0 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.883 --rc genhtml_branch_coverage=1 00:05:05.883 --rc genhtml_function_coverage=1 00:05:05.883 --rc genhtml_legend=1 00:05:05.883 --rc geninfo_all_blocks=1 00:05:05.883 --rc geninfo_unexecuted_blocks=1 00:05:05.883 00:05:05.883 ' 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.883 --rc genhtml_branch_coverage=1 00:05:05.883 --rc genhtml_function_coverage=1 00:05:05.883 --rc genhtml_legend=1 00:05:05.883 --rc 
geninfo_all_blocks=1 00:05:05.883 --rc geninfo_unexecuted_blocks=1 00:05:05.883 00:05:05.883 ' 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.883 --rc genhtml_branch_coverage=1 00:05:05.883 --rc genhtml_function_coverage=1 00:05:05.883 --rc genhtml_legend=1 00:05:05.883 --rc geninfo_all_blocks=1 00:05:05.883 --rc geninfo_unexecuted_blocks=1 00:05:05.883 00:05:05.883 ' 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.883 --rc genhtml_branch_coverage=1 00:05:05.883 --rc genhtml_function_coverage=1 00:05:05.883 --rc genhtml_legend=1 00:05:05.883 --rc geninfo_all_blocks=1 00:05:05.883 --rc geninfo_unexecuted_blocks=1 00:05:05.883 00:05:05.883 ' 00:05:05.883 12:31:05 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:05.883 12:31:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:05.883 12:31:05 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:05.883 12:31:05 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.883 12:31:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.883 ************************************ 00:05:05.883 START TEST event_perf 00:05:05.883 ************************************ 00:05:05.883 12:31:05 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:05.883 Running I/O for 1 seconds...[2024-12-14 12:31:05.598706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:05.883 [2024-12-14 12:31:05.598897] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59948 ] 00:05:06.144 [2024-12-14 12:31:05.753402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.144 [2024-12-14 12:31:05.836723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.144 [2024-12-14 12:31:05.837119] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.144 [2024-12-14 12:31:05.837261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.144 [2024-12-14 12:31:05.837281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:07.523 Running I/O for 1 seconds... 00:05:07.523 lcore 0: 201258 00:05:07.523 lcore 1: 201261 00:05:07.523 lcore 2: 201256 00:05:07.523 lcore 3: 201255 00:05:07.523 done. 
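For reference, the event_perf run that produced the per-lcore counts above boils down to a single invocation; the binary path and the -m 0xF -t 1 flags (4-core mask, one-second run) are taken verbatim from the trace, and the real/user/sys figures that follow evidently come from a time wrapper around the test.

```bash
# Event framework microbenchmark: 4 reactors (mask 0xF), 1 second.
# Prints one "lcore N: <events processed>" line per core, then "done.".
time /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
```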
00:05:07.523 00:05:07.523 real 0m1.396s 00:05:07.523 ************************************ 00:05:07.523 END TEST event_perf 00:05:07.523 ************************************ 00:05:07.523 user 0m4.197s 00:05:07.523 sys 0m0.081s 00:05:07.523 12:31:06 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.523 12:31:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.523 12:31:06 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.523 12:31:06 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:07.523 12:31:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.523 12:31:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.523 ************************************ 00:05:07.523 START TEST event_reactor 00:05:07.523 ************************************ 00:05:07.523 12:31:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:07.523 [2024-12-14 12:31:07.036505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:07.523 [2024-12-14 12:31:07.036588] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59988 ] 00:05:07.523 [2024-12-14 12:31:07.192413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.781 [2024-12-14 12:31:07.289228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.715 test_start 00:05:08.715 oneshot 00:05:08.715 tick 100 00:05:08.715 tick 100 00:05:08.715 tick 250 00:05:08.715 tick 100 00:05:08.715 tick 100 00:05:08.715 tick 100 00:05:08.715 tick 250 00:05:08.715 tick 500 00:05:08.715 tick 100 00:05:08.715 tick 100 00:05:08.715 tick 250 00:05:08.715 tick 100 00:05:08.715 tick 100 00:05:08.715 test_end 00:05:08.715 00:05:08.715 real 0m1.436s 00:05:08.715 user 0m1.258s 00:05:08.715 sys 0m0.069s 00:05:08.715 12:31:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.715 ************************************ 00:05:08.715 12:31:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:08.715 END TEST event_reactor 00:05:08.715 ************************************ 00:05:08.973 12:31:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.973 12:31:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:08.973 12:31:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.973 12:31:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:08.973 ************************************ 00:05:08.973 START TEST event_reactor_perf 00:05:08.973 ************************************ 00:05:08.973 12:31:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:08.973 [2024-12-14 12:31:08.516987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:08.973 [2024-12-14 12:31:08.517269] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60019 ] 00:05:08.973 [2024-12-14 12:31:08.676874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.231 [2024-12-14 12:31:08.771190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.604 test_start 00:05:10.604 test_end 00:05:10.604 Performance: 316894 events per second 00:05:10.604 ************************************ 00:05:10.604 END TEST event_reactor_perf 00:05:10.604 ************************************ 00:05:10.604 00:05:10.604 real 0m1.431s 00:05:10.604 user 0m1.256s 00:05:10.604 sys 0m0.065s 00:05:10.604 12:31:09 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.604 12:31:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 12:31:09 event -- event/event.sh@49 -- # uname -s 00:05:10.604 12:31:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:10.604 12:31:09 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.604 12:31:09 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.604 12:31:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.604 12:31:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 ************************************ 00:05:10.604 START TEST event_scheduler 00:05:10.604 ************************************ 00:05:10.604 12:31:09 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:10.604 * Looking for test storage... 
00:05:10.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.604 12:31:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.604 --rc genhtml_branch_coverage=1 00:05:10.604 --rc genhtml_function_coverage=1 00:05:10.604 --rc genhtml_legend=1 00:05:10.604 --rc geninfo_all_blocks=1 00:05:10.604 --rc geninfo_unexecuted_blocks=1 00:05:10.604 00:05:10.604 ' 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.604 --rc genhtml_branch_coverage=1 00:05:10.604 --rc genhtml_function_coverage=1 00:05:10.604 --rc genhtml_legend=1 00:05:10.604 --rc geninfo_all_blocks=1 00:05:10.604 --rc geninfo_unexecuted_blocks=1 00:05:10.604 00:05:10.604 ' 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.604 --rc genhtml_branch_coverage=1 00:05:10.604 --rc genhtml_function_coverage=1 00:05:10.604 --rc genhtml_legend=1 00:05:10.604 --rc geninfo_all_blocks=1 00:05:10.604 --rc geninfo_unexecuted_blocks=1 00:05:10.604 00:05:10.604 ' 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.604 --rc genhtml_branch_coverage=1 00:05:10.604 --rc genhtml_function_coverage=1 00:05:10.604 --rc genhtml_legend=1 00:05:10.604 --rc geninfo_all_blocks=1 00:05:10.604 --rc geninfo_unexecuted_blocks=1 00:05:10.604 00:05:10.604 ' 00:05:10.604 12:31:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:10.604 12:31:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60095 00:05:10.604 12:31:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:10.604 12:31:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.604 12:31:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60095 00:05:10.604 12:31:10 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60095 ']' 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.604 12:31:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.604 [2024-12-14 12:31:10.150461] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:10.604 [2024-12-14 12:31:10.150676] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60095 ] 00:05:10.604 [2024-12-14 12:31:10.296992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.862 [2024-12-14 12:31:10.395737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.862 [2024-12-14 12:31:10.395924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.862 [2024-12-14 12:31:10.396043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.862 [2024-12-14 12:31:10.396045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.428 12:31:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.428 12:31:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:11.428 12:31:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:11.428 12:31:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.428 12:31:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.428 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.428 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.428 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.428 POWER: Cannot set governor of lcore 0 to performance 00:05:11.428 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.428 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.428 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:11.428 POWER: Cannot set governor of lcore 0 to userspace 00:05:11.428 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:11.428 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:11.428 POWER: Unable to set Power Management Environment for lcore 0 00:05:11.428 [2024-12-14 12:31:11.002080] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:11.428 [2024-12-14 12:31:11.002118] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:11.428 [2024-12-14 12:31:11.002140] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:11.428 [2024-12-14 12:31:11.002205] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:11.428 [2024-12-14 12:31:11.002227] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:11.428 [2024-12-14 12:31:11.002247] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:11.428 12:31:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.428 12:31:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:11.428 12:31:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.428 12:31:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 [2024-12-14 12:31:11.231079] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:11.686 12:31:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.686 12:31:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:11.686 12:31:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.686 12:31:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.686 12:31:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 ************************************ 00:05:11.686 START TEST scheduler_create_thread 00:05:11.686 ************************************ 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 2 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 3 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 4 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 5 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.686 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.686 6 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 7 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 8 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 9 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 10 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:11.687 12:31:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 ************************************ 00:05:13.078 END TEST scheduler_create_thread 00:05:13.078 ************************************ 00:05:13.078 12:31:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.078 00:05:13.078 real 0m1.176s 00:05:13.078 user 0m0.013s 00:05:13.078 sys 0m0.006s 00:05:13.078 12:31:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.078 12:31:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.078 12:31:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:13.078 12:31:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60095 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60095 ']' 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60095 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60095 00:05:13.078 killing process with pid 60095 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60095' 00:05:13.078 12:31:12 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60095 00:05:13.078 12:31:12 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 60095 00:05:13.336 [2024-12-14 12:31:12.899734] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:13.906 ************************************ 00:05:13.906 END TEST event_scheduler 00:05:13.906 ************************************ 00:05:13.906 00:05:13.906 real 0m3.506s 00:05:13.906 user 0m5.801s 00:05:13.906 sys 0m0.349s 00:05:13.906 12:31:13 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.906 12:31:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:13.906 12:31:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:13.906 12:31:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:13.906 12:31:13 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.906 12:31:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.906 12:31:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.906 ************************************ 00:05:13.906 START TEST app_repeat 00:05:13.906 ************************************ 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:13.906 Process app_repeat pid: 60179 00:05:13.906 spdk_app_start Round 0 00:05:13.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60179 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60179' 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60179 /var/tmp/spdk-nbd.sock 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60179 ']' 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.906 12:31:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:13.906 12:31:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.906 [2024-12-14 12:31:13.563216] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
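The waitforlisten call above blocks until the freshly started app_repeat instance serves RPCs on /var/tmp/spdk-nbd.sock. A minimal sketch of that wait pattern, assuming a simple poll for the UNIX socket (the real helper in autotest_common.sh does more bookkeeping; the function name and poll interval here are illustrative):

    wait_for_rpc_sock() {
        local sock=$1 max_retries=${2:-100}    # 100 matches max_retries above
        for ((i = 0; i < max_retries; i++)); do
            [[ -S $sock ]] && return 0         # RPC socket has appeared
            sleep 0.1                          # poll interval is an assumption
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }
    wait_for_rpc_sock /var/tmp/spdk-nbd.sock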
00:05:13.906 [2024-12-14 12:31:13.563345] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60179 ] 00:05:14.166 [2024-12-14 12:31:13.721484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.166 [2024-12-14 12:31:13.800675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.166 [2024-12-14 12:31:13.800759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.731 12:31:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.731 12:31:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.731 12:31:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.989 Malloc0 00:05:14.989 12:31:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.248 Malloc1 00:05:15.248 12:31:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.248 12:31:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.507 /dev/nbd0 00:05:15.507 12:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.507 12:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.507 12:31:15 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.507 1+0 records in 00:05:15.507 1+0 records out 00:05:15.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000180554 s, 22.7 MB/s 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.507 12:31:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.507 12:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.507 12:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.507 12:31:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.765 /dev/nbd1 00:05:15.765 12:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.765 12:31:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.765 1+0 records in 00:05:15.765 1+0 records out 00:05:15.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199156 s, 20.6 MB/s 00:05:15.765 12:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.766 12:31:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.766 12:31:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.766 12:31:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.766 12:31:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.766 12:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.766 12:31:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.766 12:31:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.766 12:31:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
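The waitfornbd trace above reduces to a two-stage readiness check: poll /proc/partitions until the kernel registers the device, then prove it serves I/O with a single 4 KiB direct read. A condensed sketch of the same logic (the /tmp scratch path and the sleep between polls are illustrative additions; the test uses its own nbdtest file):

    waitfornbd_sketch() {
        local nbd=$1 tmp=/tmp/nbdtest rc
        for ((i = 1; i <= 20; i++)); do                  # same 20-try budget
            grep -q -w "$nbd" /proc/partitions && break  # device registered?
            sleep 0.1
        done
        dd if="/dev/$nbd" of="$tmp" bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s "$tmp") -ne 0 ]]                 # read produced data
        rc=$?
        rm -f "$tmp"
        return $rc
    }
    waitfornbd_sketch nbd0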
00:05:15.766 12:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.024 { 00:05:16.024 "nbd_device": "/dev/nbd0", 00:05:16.024 "bdev_name": "Malloc0" 00:05:16.024 }, 00:05:16.024 { 00:05:16.024 "nbd_device": "/dev/nbd1", 00:05:16.024 "bdev_name": "Malloc1" 00:05:16.024 } 00:05:16.024 ]' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.024 { 00:05:16.024 "nbd_device": "/dev/nbd0", 00:05:16.024 "bdev_name": "Malloc0" 00:05:16.024 }, 00:05:16.024 { 00:05:16.024 "nbd_device": "/dev/nbd1", 00:05:16.024 "bdev_name": "Malloc1" 00:05:16.024 } 00:05:16.024 ]' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.024 /dev/nbd1' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.024 /dev/nbd1' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.024 256+0 records in 00:05:16.024 256+0 records out 00:05:16.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00772207 s, 136 MB/s 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.024 256+0 records in 00:05:16.024 256+0 records out 00:05:16.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02018 s, 52.0 MB/s 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.024 256+0 records in 00:05:16.024 256+0 records out 00:05:16.024 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0188774 s, 55.5 MB/s 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.024 12:31:15 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.024 12:31:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.025 12:31:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.283 12:31:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.541 12:31:16 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.541 12:31:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.798 12:31:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.799 12:31:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.057 12:31:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.623 [2024-12-14 12:31:17.213561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.623 [2024-12-14 12:31:17.290874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.623 [2024-12-14 12:31:17.290976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.881 [2024-12-14 12:31:17.388744] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.881 [2024-12-14 12:31:17.388811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.468 spdk_app_start Round 1 00:05:20.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.468 12:31:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.468 12:31:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:20.468 12:31:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60179 /var/tmp/spdk-nbd.sock 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60179 ']' 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
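The data pass in the round that just finished is a plain write-then-compare: fill a scratch file with 1 MiB of random data, push it through each nbd device with direct I/O, then compare every device back against the source byte for byte. A standalone sketch of that flow (scratch path illustrative):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB source
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct  # direct write
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$nbd"                             # byte verify
    done
    rm "$tmp"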
00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.468 12:31:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.468 12:31:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.468 Malloc0 00:05:20.468 12:31:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.726 Malloc1 00:05:20.726 12:31:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.726 12:31:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.984 /dev/nbd0 00:05:20.984 12:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.984 12:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.984 1+0 records in 00:05:20.984 1+0 records out 
00:05:20.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280214 s, 14.6 MB/s 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.984 12:31:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.984 12:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.984 12:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.984 12:31:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:21.242 /dev/nbd1 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:21.242 1+0 records in 00:05:21.242 1+0 records out 00:05:21.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197126 s, 20.8 MB/s 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:21.242 12:31:20 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.242 12:31:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.508 12:31:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:21.508 { 00:05:21.508 "nbd_device": "/dev/nbd0", 00:05:21.508 "bdev_name": "Malloc0" 00:05:21.508 }, 00:05:21.508 { 00:05:21.508 "nbd_device": "/dev/nbd1", 00:05:21.508 "bdev_name": "Malloc1" 00:05:21.508 } 
00:05:21.508 ]' 00:05:21.508 12:31:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:21.508 { 00:05:21.508 "nbd_device": "/dev/nbd0", 00:05:21.508 "bdev_name": "Malloc0" 00:05:21.508 }, 00:05:21.508 { 00:05:21.508 "nbd_device": "/dev/nbd1", 00:05:21.508 "bdev_name": "Malloc1" 00:05:21.508 } 00:05:21.508 ]' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:21.509 /dev/nbd1' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:21.509 /dev/nbd1' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:21.509 256+0 records in 00:05:21.509 256+0 records out 00:05:21.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418008 s, 251 MB/s 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:21.509 256+0 records in 00:05:21.509 256+0 records out 00:05:21.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162148 s, 64.7 MB/s 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:21.509 256+0 records in 00:05:21.509 256+0 records out 00:05:21.509 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143049 s, 73.3 MB/s 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:21.509 12:31:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.509 12:31:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.767 12:31:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.767 12:31:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.767 12:31:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.768 12:31:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:22.025 12:31:21 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:22.283 12:31:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:22.283 12:31:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:22.541 12:31:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:23.107 [2024-12-14 12:31:22.609966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.107 [2024-12-14 12:31:22.688335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.107 [2024-12-14 12:31:22.688431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.107 [2024-12-14 12:31:22.784906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:23.107 [2024-12-14 12:31:22.784959] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.637 spdk_app_start Round 2 00:05:25.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.637 12:31:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:25.637 12:31:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:25.637 12:31:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60179 /var/tmp/spdk-nbd.sock 00:05:25.637 12:31:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60179 ']' 00:05:25.637 12:31:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.637 12:31:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.637 12:31:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
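The disk-count checks in the rounds above lean on grep -c, which exits nonzero when it counts zero matches; the trailing true visible in the trace keeps that from aborting the script once all devices are detached. A condensed sketch of the same check:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    json=$("$rpc" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')     # attached device paths
    count=$(echo "$json" | grep -c /dev/nbd || true)      # 0 after teardown
    echo "attached nbd devices: $count"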
00:05:25.638 12:31:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.638 12:31:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.638 12:31:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.638 12:31:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:25.638 12:31:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:25.895 Malloc0 00:05:25.895 12:31:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.153 Malloc1 00:05:26.153 12:31:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.153 12:31:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.410 /dev/nbd0 00:05:26.410 12:31:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.410 12:31:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.410 1+0 records in 00:05:26.410 1+0 records out 
00:05:26.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238591 s, 17.2 MB/s 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.410 12:31:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.410 12:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.410 12:31:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.410 12:31:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.669 /dev/nbd1 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.669 1+0 records in 00:05:26.669 1+0 records out 00:05:26.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173377 s, 23.6 MB/s 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:26.669 12:31:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:26.669 { 00:05:26.669 "nbd_device": "/dev/nbd0", 00:05:26.669 "bdev_name": "Malloc0" 00:05:26.669 }, 00:05:26.669 { 00:05:26.669 "nbd_device": "/dev/nbd1", 00:05:26.669 "bdev_name": "Malloc1" 00:05:26.669 } 
00:05:26.669 ]' 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.669 12:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:26.669 { 00:05:26.669 "nbd_device": "/dev/nbd0", 00:05:26.669 "bdev_name": "Malloc0" 00:05:26.669 }, 00:05:26.669 { 00:05:26.669 "nbd_device": "/dev/nbd1", 00:05:26.669 "bdev_name": "Malloc1" 00:05:26.669 } 00:05:26.669 ]' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:26.928 /dev/nbd1' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:26.928 /dev/nbd1' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:26.928 256+0 records in 00:05:26.928 256+0 records out 00:05:26.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00731524 s, 143 MB/s 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:26.928 256+0 records in 00:05:26.928 256+0 records out 00:05:26.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168617 s, 62.2 MB/s 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:26.928 256+0 records in 00:05:26.928 256+0 records out 00:05:26.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01874 s, 56.0 MB/s 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:26.928 12:31:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.185 12:31:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.186 12:31:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.444 12:31:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:27.444 12:31:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.444 12:31:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.444 12:31:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.444 12:31:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.444 12:31:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.444 12:31:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:28.010 12:31:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:28.268 [2024-12-14 12:31:28.002757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.526 [2024-12-14 12:31:28.080182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.527 [2024-12-14 12:31:28.080291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.527 [2024-12-14 12:31:28.183787] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.527 [2024-12-14 12:31:28.183956] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.055 12:31:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60179 /var/tmp/spdk-nbd.sock 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60179 ']' 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
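The teardown traced below goes through the killprocess helper; condensed, its safety checks are: confirm the pid is still alive with kill -0, read the command name (the Linux branch uses ps), refuse to signal a sudo wrapper, then SIGTERM and reap. A sketch under those assumptions (function name illustrative):

    killprocess_sketch() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already exited
        name=$(ps --no-headers -o comm= "$pid")   # Linux path, as in this log
        [[ $name == sudo ]] && return 1           # never kill the wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap if it is our child
    }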
00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.055 12:31:30 event.app_repeat -- event/event.sh@39 -- # killprocess 60179 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60179 ']' 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60179 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60179 00:05:31.055 killing process with pid 60179 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60179' 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60179 00:05:31.055 12:31:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60179 00:05:31.620 spdk_app_start is called in Round 0. 00:05:31.620 Shutdown signal received, stop current app iteration 00:05:31.620 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:31.620 spdk_app_start is called in Round 1. 00:05:31.620 Shutdown signal received, stop current app iteration 00:05:31.620 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:31.620 spdk_app_start is called in Round 2. 00:05:31.620 Shutdown signal received, stop current app iteration 00:05:31.620 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:31.620 spdk_app_start is called in Round 3. 00:05:31.620 Shutdown signal received, stop current app iteration 00:05:31.620 12:31:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:31.620 12:31:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:31.620 00:05:31.620 real 0m17.684s 00:05:31.620 user 0m38.772s 00:05:31.620 sys 0m2.086s 00:05:31.620 ************************************ 00:05:31.620 END TEST app_repeat 00:05:31.620 ************************************ 00:05:31.620 12:31:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.620 12:31:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.620 12:31:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:31.620 12:31:31 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.620 12:31:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.620 12:31:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.620 12:31:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.620 ************************************ 00:05:31.620 START TEST cpu_locks 00:05:31.620 ************************************ 00:05:31.620 12:31:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:31.620 * Looking for test storage... 
00:05:31.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.620 12:31:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.620 12:31:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.620 12:31:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.879 12:31:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.879 --rc genhtml_branch_coverage=1 00:05:31.879 --rc genhtml_function_coverage=1 00:05:31.879 --rc genhtml_legend=1 00:05:31.879 --rc geninfo_all_blocks=1 00:05:31.879 --rc geninfo_unexecuted_blocks=1 00:05:31.879 00:05:31.879 ' 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.879 --rc genhtml_branch_coverage=1 00:05:31.879 --rc genhtml_function_coverage=1 
00:05:31.879 --rc genhtml_legend=1 00:05:31.879 --rc geninfo_all_blocks=1 00:05:31.879 --rc geninfo_unexecuted_blocks=1 00:05:31.879 00:05:31.879 ' 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.879 --rc genhtml_branch_coverage=1 00:05:31.879 --rc genhtml_function_coverage=1 00:05:31.879 --rc genhtml_legend=1 00:05:31.879 --rc geninfo_all_blocks=1 00:05:31.879 --rc geninfo_unexecuted_blocks=1 00:05:31.879 00:05:31.879 ' 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.879 --rc genhtml_branch_coverage=1 00:05:31.879 --rc genhtml_function_coverage=1 00:05:31.879 --rc genhtml_legend=1 00:05:31.879 --rc geninfo_all_blocks=1 00:05:31.879 --rc geninfo_unexecuted_blocks=1 00:05:31.879 00:05:31.879 ' 00:05:31.879 12:31:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:31.879 12:31:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:31.879 12:31:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:31.879 12:31:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.879 12:31:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.879 ************************************ 00:05:31.879 START TEST default_locks 00:05:31.879 ************************************ 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60609 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60609 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60609 ']' 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.879 12:31:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.879 [2024-12-14 12:31:31.461036] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:31.879 [2024-12-14 12:31:31.461282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60609 ] 00:05:32.138 [2024-12-14 12:31:31.615747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.138 [2024-12-14 12:31:31.712953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.704 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.704 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:32.704 12:31:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60609 00:05:32.704 12:31:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60609 00:05:32.704 12:31:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60609 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60609 ']' 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60609 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60609 00:05:32.963 killing process with pid 60609 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60609' 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60609 00:05:32.963 12:31:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60609 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60609 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60609 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60609 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60609 ']' 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
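Two helpers are fully visible in the xtrace above and can be reconstructed almost verbatim: locks_exist (cpu_locks.sh line 22 in this build) asserts that the pid still holds an spdk_cpu_lock file lock, and killprocess (autotest_common.sh) checks that the process is alive and is not a sudo wrapper before killing and reaping it. A sketch matching the traced commands (the in-tree versions may differ in detail):

locks_exist() {
    # The target must hold at least one flock on an spdk_cpu_lock file.
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                            # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in the trace
    fi
    [ "$process_name" = sudo ] && return 1                # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                           # reap and propagate exit status
}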
00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.336 ERROR: process (pid: 60609) is no longer running 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60609) - No such process 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:34.336 00:05:34.336 real 0m2.583s 00:05:34.336 user 0m2.594s 00:05:34.336 sys 0m0.425s 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.336 12:31:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.336 ************************************ 00:05:34.336 END TEST default_locks 00:05:34.336 ************************************ 00:05:34.336 12:31:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:34.336 12:31:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.336 12:31:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.336 12:31:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.336 ************************************ 00:05:34.336 START TEST default_locks_via_rpc 00:05:34.336 ************************************ 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60668 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60668 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60668 ']' 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
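With pid 60609 already gone, the NOT waitforlisten leg above fails as intended (es=1), and no_locks confirms the lock files disappeared with their owner: the traced lock_files=() shows the glob expanding to nothing. A sketch of that check, with the glob pattern inferred from the /var/tmp/spdk_cpu_lock_NNN paths traced later in this suite:

no_locks() {
    # nullglob assumed: with no daemon alive, the pattern matches nothing.
    local lock_files=(/var/tmp/spdk_cpu_lock*)
    (( ${#lock_files[@]} != 0 )) && return 1
    return 0
}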
00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.336 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.594 [2024-12-14 12:31:34.094264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:34.594 [2024-12-14 12:31:34.094378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60668 ] 00:05:34.594 [2024-12-14 12:31:34.249138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.594 [2024-12-14 12:31:34.329088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60668 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60668 00:05:35.528 12:31:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60668 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60668 ']' 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60668 00:05:35.528 12:31:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60668 00:05:35.528 killing process with pid 60668 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60668' 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60668 00:05:35.528 12:31:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60668 00:05:36.901 ************************************ 00:05:36.901 END TEST default_locks_via_rpc 00:05:36.901 ************************************ 00:05:36.901 00:05:36.901 real 0m2.312s 00:05:36.901 user 0m2.318s 00:05:36.901 sys 0m0.438s 00:05:36.901 12:31:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.901 12:31:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 12:31:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.901 12:31:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.901 12:31:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.901 12:31:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.901 ************************************ 00:05:36.901 START TEST non_locking_app_on_locked_coremask 00:05:36.901 ************************************ 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60720 00:05:36.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60720 /var/tmp/spdk.sock 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60720 ']' 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
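default_locks_via_rpc (pid 60668), which wrapped up above, covers the same lock files as default_locks but toggles them at runtime over JSON-RPC rather than only at startup. Condensed from the trace:

# Flow of default_locks_via_rpc, per the xtrace for pid 60668:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # comes up holding the core-0 lock
waitforlisten "$spdk_tgt_pid"
rpc_cmd framework_disable_cpumask_locks    # releases the lock file at runtime
no_locks                                   # glob finds no /var/tmp/spdk_cpu_lock* files
rpc_cmd framework_enable_cpumask_locks     # re-claims the lock
locks_exist "$spdk_tgt_pid"                # lslocks shows spdk_cpu_lock again
killprocess "$spdk_tgt_pid"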
00:05:36.901 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.902 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.902 12:31:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.902 [2024-12-14 12:31:36.452641] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:36.902 [2024-12-14 12:31:36.452749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60720 ] 00:05:36.902 [2024-12-14 12:31:36.610277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.160 [2024-12-14 12:31:36.706083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60736 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60736 /var/tmp/spdk2.sock 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60736 ']' 00:05:37.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.804 12:31:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.804 [2024-12-14 12:31:37.358232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:37.804 [2024-12-14 12:31:37.358530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60736 ] 00:05:37.804 [2024-12-14 12:31:37.532136] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:37.804 [2024-12-14 12:31:37.532186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.062 [2024-12-14 12:31:37.732244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60720 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60720 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60720 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60720 ']' 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60720 00:05:39.437 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:39.438 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.438 12:31:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60720 00:05:39.438 killing process with pid 60720 00:05:39.438 12:31:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:39.438 12:31:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:39.438 12:31:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60720' 00:05:39.438 12:31:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60720 00:05:39.438 12:31:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60720 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60736 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60736 ']' 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60736 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60736 00:05:41.968 killing process with pid 60736 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60736' 00:05:41.968 12:31:41 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60736 00:05:41.968 12:31:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60736 00:05:42.906 ************************************ 00:05:42.907 END TEST non_locking_app_on_locked_coremask 00:05:42.907 ************************************ 00:05:42.907 00:05:42.907 real 0m6.182s 00:05:42.907 user 0m6.456s 00:05:42.907 sys 0m0.796s 00:05:42.907 12:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.907 12:31:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.907 12:31:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:42.907 12:31:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.907 12:31:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.907 12:31:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.907 ************************************ 00:05:42.907 START TEST locking_app_on_unlocked_coremask 00:05:42.907 ************************************ 00:05:42.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60827 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60827 /var/tmp/spdk.sock 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60827 ']' 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.907 12:31:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:43.165 [2024-12-14 12:31:42.663688] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:43.165 [2024-12-14 12:31:42.664234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60827 ] 00:05:43.165 [2024-12-14 12:31:42.813043] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:43.165 [2024-12-14 12:31:42.813087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.165 [2024-12-14 12:31:42.889574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60843 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60843 /var/tmp/spdk2.sock 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60843 ']' 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.100 12:31:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.100 [2024-12-14 12:31:43.567391] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:44.100 [2024-12-14 12:31:43.567664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60843 ] 00:05:44.100 [2024-12-14 12:31:43.732635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.359 [2024-12-14 12:31:43.893940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.331 12:31:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.331 12:31:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:45.331 12:31:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60843 00:05:45.331 12:31:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60843 00:05:45.331 12:31:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60827 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60827 ']' 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60827 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60827 00:05:45.607 killing process with pid 60827 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60827' 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60827 00:05:45.607 12:31:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60827 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60843 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60843 ']' 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60843 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60843 00:05:48.136 killing process with pid 60843 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.136 12:31:47 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60843' 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60843 00:05:48.136 12:31:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60843 00:05:49.071 00:05:49.071 real 0m6.114s 00:05:49.071 user 0m6.377s 00:05:49.071 sys 0m0.814s 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.071 ************************************ 00:05:49.071 END TEST locking_app_on_unlocked_coremask 00:05:49.071 ************************************ 00:05:49.071 12:31:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:49.071 12:31:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.071 12:31:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.071 12:31:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.071 ************************************ 00:05:49.071 START TEST locking_app_on_locked_coremask 00:05:49.071 ************************************ 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60934 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60934 /var/tmp/spdk.sock 00:05:49.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60934 ']' 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.071 12:31:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.329 [2024-12-14 12:31:48.833901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:49.329 [2024-12-14 12:31:48.834020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60934 ] 00:05:49.329 [2024-12-14 12:31:48.983821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.329 [2024-12-14 12:31:49.063670] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60950 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60950 /var/tmp/spdk2.sock 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60950 /var/tmp/spdk2.sock 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60950 /var/tmp/spdk2.sock 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60950 ']' 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.894 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.151 12:31:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.151 [2024-12-14 12:31:49.702571] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:50.151 [2024-12-14 12:31:49.702889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60950 ] 00:05:50.151 [2024-12-14 12:31:49.867686] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60934 has claimed it. 00:05:50.151 [2024-12-14 12:31:49.867732] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:50.716 ERROR: process (pid: 60950) is no longer running 00:05:50.716 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60950) - No such process 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60934 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60934 00:05:50.717 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60934 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60934 ']' 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60934 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60934 00:05:50.976 killing process with pid 60934 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60934' 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60934 00:05:50.976 12:31:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60934 00:05:52.359 ************************************ 00:05:52.360 END TEST locking_app_on_locked_coremask 00:05:52.360 ************************************ 00:05:52.360 00:05:52.360 real 0m2.963s 00:05:52.360 user 0m3.147s 00:05:52.360 sys 0m0.541s 00:05:52.360 12:31:51 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.360 12:31:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.360 12:31:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:52.360 12:31:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.360 12:31:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.360 12:31:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.360 ************************************ 00:05:52.360 START TEST locking_overlapped_coremask 00:05:52.360 ************************************ 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:52.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61003 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61003 /var/tmp/spdk.sock 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61003 ']' 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.360 12:31:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:52.360 [2024-12-14 12:31:51.843936] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:52.360 [2024-12-14 12:31:51.844050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61003 ] 00:05:52.360 [2024-12-14 12:31:51.998097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.360 [2024-12-14 12:31:52.082591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.360 [2024-12-14 12:31:52.082836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.360 [2024-12-14 12:31:52.082864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61021 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61021 /var/tmp/spdk2.sock 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61021 /var/tmp/spdk2.sock 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61021 /var/tmp/spdk2.sock 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61021 ']' 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.930 12:31:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.190 [2024-12-14 12:31:52.699497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
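The two masks in locking_overlapped_coremask intersect by construction, so the claim error traced just below is the expected outcome. The overlap, spelled out:

# 0x07 = 0b00111 -> cores 0,1,2   (first instance, pid 61003)
# 0x1c = 0b11100 -> cores 2,3,4   (second instance, pid 61021)
# 0x07 & 0x1c = 0x04 -> core 2 is contested, so the second claim must fail.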
00:05:53.190 [2024-12-14 12:31:52.699770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61021 ] 00:05:53.190 [2024-12-14 12:31:52.885741] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61003 has claimed it. 00:05:53.190 [2024-12-14 12:31:52.885794] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.769 ERROR: process (pid: 61021) is no longer running 00:05:53.769 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61021) - No such process 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61003 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61003 ']' 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61003 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61003 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61003' 00:05:53.769 killing process with pid 61003 00:05:53.769 12:31:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61003 00:05:53.769 12:31:53 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61003 00:05:55.158 00:05:55.158 real 0m2.773s 00:05:55.158 user 0m7.538s 00:05:55.158 sys 0m0.391s 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.158 ************************************ 00:05:55.158 END TEST locking_overlapped_coremask 00:05:55.158 ************************************ 00:05:55.158 12:31:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:55.158 12:31:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.158 12:31:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.158 12:31:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.158 ************************************ 00:05:55.158 START TEST locking_overlapped_coremask_via_rpc 00:05:55.158 ************************************ 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61074 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61074 /var/tmp/spdk.sock 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61074 ']' 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.158 12:31:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.158 [2024-12-14 12:31:54.650818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:55.158 [2024-12-14 12:31:54.650910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61074 ] 00:05:55.158 [2024-12-14 12:31:54.800826] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
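Before pid 61003 was killed, the previous test ran check_remaining_locks (cpu_locks.sh lines 36-38), fully visible in the trace: after the failed second instance exits, the surviving lock files must be exactly those of mask 0x7. Reconstructed:

check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # must match exactly
}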
00:05:55.158 [2024-12-14 12:31:54.800861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.158 [2024-12-14 12:31:54.880543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.158 [2024-12-14 12:31:54.880799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.158 [2024-12-14 12:31:54.880821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61092 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61092 /var/tmp/spdk2.sock 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61092 ']' 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.092 12:31:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.092 [2024-12-14 12:31:55.560007] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:56.092 [2024-12-14 12:31:55.560628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61092 ] 00:05:56.092 [2024-12-14 12:31:55.733199] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
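Both spdk_tgt instances above were launched with --disable-cpumask-locks, so their overlapping cpumasks (0x7 and 0x1c both include core 2) do not conflict at startup; the core locks are only taken later over JSON-RPC. A minimal sketch of what the via_rpc variant drives next, assuming the stock rpc.py client and the two sockets shown in the trace:

  # Sketch only: enable core locks on each already-running target.
  # The first call claims cores 0-2; the second must fail on core 2,
  # which is exactly the -32603 error recorded below.
  scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks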
00:05:56.092 [2024-12-14 12:31:55.733235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.350 [2024-12-14 12:31:55.902006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.350 [2024-12-14 12:31:55.902032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.350 [2024-12-14 12:31:55.902079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:57.283 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.284 [2024-12-14 12:31:56.863179] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61074 has claimed it. 00:05:57.284 request: 00:05:57.284 { 00:05:57.284 "method": "framework_enable_cpumask_locks", 00:05:57.284 "req_id": 1 00:05:57.284 } 00:05:57.284 Got JSON-RPC error response 00:05:57.284 response: 00:05:57.284 { 00:05:57.284 "code": -32603, 00:05:57.284 "message": "Failed to claim CPU core: 2" 00:05:57.284 } 00:05:57.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
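The "Failed to claim CPU core: 2" response above is the expected result: mask 0x7 covers cores 0-2 and mask 0x1c covers cores 2-4, so the two targets contend for exactly one core. A quick hand check of the overlap (illustrative shell arithmetic, not part of the test scripts):

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 = core 2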
00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61074 /var/tmp/spdk.sock 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61074 ']' 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.284 12:31:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61092 /var/tmp/spdk2.sock 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61092 ']' 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
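Once the failed claim is handled, check_remaining_locks verifies that only the surviving target's lock files remain. De-escaped, the pattern match traced below reduces to the following sketch (assuming cores 0-2 hold locks; the real helper compares unquoted expansions, which is why xtrace prints the backslash-escaped form):

  locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually present
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one file per claimed core 0-2
  [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'locks match'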
00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.542 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.801 00:05:57.801 real 0m2.708s 00:05:57.801 user 0m1.071s 00:05:57.801 sys 0m0.141s 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.801 12:31:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.801 ************************************ 00:05:57.801 END TEST locking_overlapped_coremask_via_rpc 00:05:57.801 ************************************ 00:05:57.801 12:31:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:57.801 12:31:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61074 ]] 00:05:57.801 12:31:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61074 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61074 ']' 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61074 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61074 00:05:57.801 killing process with pid 61074 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61074' 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61074 00:05:57.801 12:31:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61074 00:05:59.175 12:31:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61092 ]] 00:05:59.175 12:31:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61092 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61092 ']' 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61092 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.175 
12:31:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61092 00:05:59.175 killing process with pid 61092 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61092' 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61092 00:05:59.175 12:31:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61092 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.110 Process with pid 61074 is not found 00:06:00.110 Process with pid 61092 is not found 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61074 ]] 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61074 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61074 ']' 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61074 00:06:00.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61074) - No such process 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61074 is not found' 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61092 ]] 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61092 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61092 ']' 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61092 00:06:00.110 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61092) - No such process 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61092 is not found' 00:06:00.110 12:31:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:00.110 ************************************ 00:06:00.110 END TEST cpu_locks 00:06:00.110 ************************************ 00:06:00.110 00:06:00.110 real 0m28.595s 00:06:00.110 user 0m48.996s 00:06:00.110 sys 0m4.331s 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.110 12:31:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.368 ************************************ 00:06:00.368 END TEST event 00:06:00.368 ************************************ 00:06:00.368 00:06:00.368 real 0m54.445s 00:06:00.368 user 1m40.440s 00:06:00.368 sys 0m7.213s 00:06:00.368 12:31:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.368 12:31:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.368 12:31:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.368 12:31:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.368 12:31:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.368 12:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:00.368 ************************************ 00:06:00.368 START TEST thread 00:06:00.368 ************************************ 00:06:00.368 12:31:59 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:00.368 * Looking for test storage... 
00:06:00.368 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:00.368 12:31:59 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:00.368 12:31:59 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:00.368 12:31:59 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:00.368 12:32:00 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:00.368 12:32:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.368 12:32:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.368 12:32:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.368 12:32:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.368 12:32:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.368 12:32:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.369 12:32:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.369 12:32:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.369 12:32:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.369 12:32:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.369 12:32:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.369 12:32:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:00.369 12:32:00 thread -- scripts/common.sh@345 -- # : 1 00:06:00.369 12:32:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.369 12:32:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.369 12:32:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:00.369 12:32:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:00.369 12:32:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.369 12:32:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:00.369 12:32:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.369 12:32:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:00.369 12:32:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:00.369 12:32:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.369 12:32:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:00.369 12:32:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.369 12:32:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.369 12:32:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.369 12:32:00 thread -- scripts/common.sh@368 -- # return 0 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:00.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.369 --rc genhtml_branch_coverage=1 00:06:00.369 --rc genhtml_function_coverage=1 00:06:00.369 --rc genhtml_legend=1 00:06:00.369 --rc geninfo_all_blocks=1 00:06:00.369 --rc geninfo_unexecuted_blocks=1 00:06:00.369 00:06:00.369 ' 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:00.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.369 --rc genhtml_branch_coverage=1 00:06:00.369 --rc genhtml_function_coverage=1 00:06:00.369 --rc genhtml_legend=1 00:06:00.369 --rc geninfo_all_blocks=1 00:06:00.369 --rc geninfo_unexecuted_blocks=1 00:06:00.369 00:06:00.369 ' 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:00.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:00.369 --rc genhtml_branch_coverage=1 00:06:00.369 --rc genhtml_function_coverage=1 00:06:00.369 --rc genhtml_legend=1 00:06:00.369 --rc geninfo_all_blocks=1 00:06:00.369 --rc geninfo_unexecuted_blocks=1 00:06:00.369 00:06:00.369 ' 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:00.369 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.369 --rc genhtml_branch_coverage=1 00:06:00.369 --rc genhtml_function_coverage=1 00:06:00.369 --rc genhtml_legend=1 00:06:00.369 --rc geninfo_all_blocks=1 00:06:00.369 --rc geninfo_unexecuted_blocks=1 00:06:00.369 00:06:00.369 ' 00:06:00.369 12:32:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.369 12:32:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.369 ************************************ 00:06:00.369 START TEST thread_poller_perf 00:06:00.369 ************************************ 00:06:00.369 12:32:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:00.369 [2024-12-14 12:32:00.091237] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:00.369 [2024-12-14 12:32:00.091343] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61247 ] 00:06:00.627 [2024-12-14 12:32:00.252738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.627 [2024-12-14 12:32:00.350452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.627 Running 1000 pollers for 1 seconds with 1 microseconds period. 
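The table that follows reports poller_cost as busy TSC cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Recomputing by hand from the totals printed below (integer division, matching the tool's rounding; the variable names here exist only for this recomputation):

  busy=2611697154 runs=307000 tsc_hz=2600000000
  echo "$(( busy / runs )) cyc, $(( busy / runs * 1000000000 / tsc_hz )) nsec"
  # -> 8507 cyc, 3271 nsec for the 1 us period run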
00:06:02.049 [2024-12-14T12:32:01.786Z] ====================================== 00:06:02.049 [2024-12-14T12:32:01.786Z] busy:2611697154 (cyc) 00:06:02.049 [2024-12-14T12:32:01.786Z] total_run_count: 307000 00:06:02.049 [2024-12-14T12:32:01.786Z] tsc_hz: 2600000000 (cyc) 00:06:02.049 [2024-12-14T12:32:01.786Z] ====================================== 00:06:02.049 [2024-12-14T12:32:01.786Z] poller_cost: 8507 (cyc), 3271 (nsec) 00:06:02.049 00:06:02.049 real 0m1.455s 00:06:02.049 user 0m1.282s 00:06:02.049 sys 0m0.064s 00:06:02.049 12:32:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.049 12:32:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:02.049 ************************************ 00:06:02.049 END TEST thread_poller_perf 00:06:02.049 ************************************ 00:06:02.049 12:32:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.049 12:32:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:02.049 12:32:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.049 12:32:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.049 ************************************ 00:06:02.049 START TEST thread_poller_perf 00:06:02.049 ************************************ 00:06:02.049 12:32:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:02.049 [2024-12-14 12:32:01.601119] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:02.049 [2024-12-14 12:32:01.601340] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61283 ] 00:06:02.049 [2024-12-14 12:32:01.764275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.307 [2024-12-14 12:32:01.862341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.307 Running 1000 pollers for 1 seconds with 0 microseconds period. 
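With a 0 microsecond period (-l 0) the poller is rescheduled on every reactor iteration instead of firing from a timer, so the run below gets through roughly 12x more iterations at a far lower per-call cost. The same hand check applied to the second table's totals:

  busy=2603635454 runs=3642000 tsc_hz=2600000000
  echo "$(( busy / runs )) cyc, $(( busy / runs * 1000000000 / tsc_hz )) nsec"
  # -> 714 cyc, 274 nsec for the untimed poller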
00:06:03.681 [2024-12-14T12:32:03.418Z] ====================================== 00:06:03.681 [2024-12-14T12:32:03.418Z] busy:2603635454 (cyc) 00:06:03.681 [2024-12-14T12:32:03.418Z] total_run_count: 3642000 00:06:03.681 [2024-12-14T12:32:03.418Z] tsc_hz: 2600000000 (cyc) 00:06:03.681 [2024-12-14T12:32:03.419Z] ====================================== 00:06:03.682 [2024-12-14T12:32:03.419Z] poller_cost: 714 (cyc), 274 (nsec) 00:06:03.682 00:06:03.682 real 0m1.455s 00:06:03.682 user 0m1.273s 00:06:03.682 sys 0m0.073s 00:06:03.682 12:32:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.682 12:32:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:03.682 ************************************ 00:06:03.682 END TEST thread_poller_perf 00:06:03.682 ************************************ 00:06:03.682 12:32:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:03.682 00:06:03.682 real 0m3.145s 00:06:03.682 user 0m2.672s 00:06:03.682 sys 0m0.239s 00:06:03.682 ************************************ 00:06:03.682 END TEST thread 00:06:03.682 ************************************ 00:06:03.682 12:32:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.682 12:32:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.682 12:32:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:03.682 12:32:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:03.682 12:32:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.682 12:32:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.682 12:32:03 -- common/autotest_common.sh@10 -- # set +x 00:06:03.682 ************************************ 00:06:03.682 START TEST app_cmdline 00:06:03.682 ************************************ 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:03.682 * Looking for test storage... 
00:06:03.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.682 12:32:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.682 --rc genhtml_branch_coverage=1 00:06:03.682 --rc genhtml_function_coverage=1 00:06:03.682 --rc genhtml_legend=1 00:06:03.682 --rc geninfo_all_blocks=1 00:06:03.682 --rc geninfo_unexecuted_blocks=1 00:06:03.682 00:06:03.682 ' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.682 --rc genhtml_branch_coverage=1 00:06:03.682 --rc genhtml_function_coverage=1 00:06:03.682 --rc genhtml_legend=1 00:06:03.682 --rc geninfo_all_blocks=1 00:06:03.682 --rc geninfo_unexecuted_blocks=1 00:06:03.682 
00:06:03.682 ' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.682 --rc genhtml_branch_coverage=1 00:06:03.682 --rc genhtml_function_coverage=1 00:06:03.682 --rc genhtml_legend=1 00:06:03.682 --rc geninfo_all_blocks=1 00:06:03.682 --rc geninfo_unexecuted_blocks=1 00:06:03.682 00:06:03.682 ' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.682 --rc genhtml_branch_coverage=1 00:06:03.682 --rc genhtml_function_coverage=1 00:06:03.682 --rc genhtml_legend=1 00:06:03.682 --rc geninfo_all_blocks=1 00:06:03.682 --rc geninfo_unexecuted_blocks=1 00:06:03.682 00:06:03.682 ' 00:06:03.682 12:32:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:03.682 12:32:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61367 00:06:03.682 12:32:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61367 00:06:03.682 12:32:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61367 ']' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.682 12:32:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:03.682 [2024-12-14 12:32:03.287597] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:03.682 [2024-12-14 12:32:03.287830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61367 ] 00:06:03.939 [2024-12-14 12:32:03.439198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.940 [2024-12-14 12:32:03.537929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.504 12:32:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.504 12:32:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:04.504 12:32:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:04.762 { 00:06:04.762 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:04.762 "fields": { 00:06:04.762 "major": 25, 00:06:04.762 "minor": 1, 00:06:04.762 "patch": 0, 00:06:04.762 "suffix": "-pre", 00:06:04.762 "commit": "e01cb43b8" 00:06:04.762 } 00:06:04.762 } 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:04.762 12:32:04 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:04.762 12:32:04 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:05.019 request: 00:06:05.019 { 00:06:05.019 "method": "env_dpdk_get_mem_stats", 00:06:05.019 "req_id": 1 00:06:05.019 } 00:06:05.019 Got JSON-RPC error response 00:06:05.019 response: 00:06:05.019 { 00:06:05.019 "code": -32601, 00:06:05.019 "message": "Method not found" 00:06:05.019 } 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.019 12:32:04 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61367 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61367 ']' 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61367 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61367 00:06:05.019 killing process with pid 61367 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61367' 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@973 -- # kill 61367 00:06:05.019 12:32:04 app_cmdline -- common/autotest_common.sh@978 -- # wait 61367 00:06:06.917 00:06:06.917 real 0m3.050s 00:06:06.917 user 0m3.356s 00:06:06.917 sys 0m0.414s 00:06:06.917 12:32:06 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.917 12:32:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.917 ************************************ 00:06:06.917 END TEST app_cmdline 00:06:06.917 ************************************ 00:06:06.917 12:32:06 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.917 12:32:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.917 12:32:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.917 12:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:06.917 ************************************ 00:06:06.917 START TEST version 00:06:06.917 ************************************ 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:06.917 * Looking for test storage... 
00:06:06.917 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.917 12:32:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.917 12:32:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.917 12:32:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.917 12:32:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.917 12:32:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.917 12:32:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.917 12:32:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.917 12:32:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.917 12:32:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.917 12:32:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.917 12:32:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.917 12:32:06 version -- scripts/common.sh@344 -- # case "$op" in 00:06:06.917 12:32:06 version -- scripts/common.sh@345 -- # : 1 00:06:06.917 12:32:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.917 12:32:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.917 12:32:06 version -- scripts/common.sh@365 -- # decimal 1 00:06:06.917 12:32:06 version -- scripts/common.sh@353 -- # local d=1 00:06:06.917 12:32:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.917 12:32:06 version -- scripts/common.sh@355 -- # echo 1 00:06:06.917 12:32:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.917 12:32:06 version -- scripts/common.sh@366 -- # decimal 2 00:06:06.917 12:32:06 version -- scripts/common.sh@353 -- # local d=2 00:06:06.917 12:32:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.917 12:32:06 version -- scripts/common.sh@355 -- # echo 2 00:06:06.917 12:32:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.917 12:32:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.917 12:32:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.917 12:32:06 version -- scripts/common.sh@368 -- # return 0 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.917 --rc genhtml_branch_coverage=1 00:06:06.917 --rc genhtml_function_coverage=1 00:06:06.917 --rc genhtml_legend=1 00:06:06.917 --rc geninfo_all_blocks=1 00:06:06.917 --rc geninfo_unexecuted_blocks=1 00:06:06.917 00:06:06.917 ' 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.917 --rc genhtml_branch_coverage=1 00:06:06.917 --rc genhtml_function_coverage=1 00:06:06.917 --rc genhtml_legend=1 00:06:06.917 --rc geninfo_all_blocks=1 00:06:06.917 --rc geninfo_unexecuted_blocks=1 00:06:06.917 00:06:06.917 ' 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.917 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:06.917 --rc genhtml_branch_coverage=1 00:06:06.917 --rc genhtml_function_coverage=1 00:06:06.917 --rc genhtml_legend=1 00:06:06.917 --rc geninfo_all_blocks=1 00:06:06.917 --rc geninfo_unexecuted_blocks=1 00:06:06.917 00:06:06.917 ' 00:06:06.917 12:32:06 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.917 --rc genhtml_branch_coverage=1 00:06:06.917 --rc genhtml_function_coverage=1 00:06:06.917 --rc genhtml_legend=1 00:06:06.917 --rc geninfo_all_blocks=1 00:06:06.917 --rc geninfo_unexecuted_blocks=1 00:06:06.917 00:06:06.917 ' 00:06:06.917 12:32:06 version -- app/version.sh@17 -- # get_header_version major 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # cut -f2 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.917 12:32:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.917 12:32:06 version -- app/version.sh@17 -- # major=25 00:06:06.917 12:32:06 version -- app/version.sh@18 -- # get_header_version minor 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.917 12:32:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # cut -f2 00:06:06.917 12:32:06 version -- app/version.sh@18 -- # minor=1 00:06:06.917 12:32:06 version -- app/version.sh@19 -- # get_header_version patch 00:06:06.917 12:32:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # cut -f2 00:06:06.917 12:32:06 version -- app/version.sh@19 -- # patch=0 00:06:06.917 12:32:06 version -- app/version.sh@20 -- # get_header_version suffix 00:06:06.917 12:32:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # cut -f2 00:06:06.917 12:32:06 version -- app/version.sh@14 -- # tr -d '"' 00:06:06.917 12:32:06 version -- app/version.sh@20 -- # suffix=-pre 00:06:06.917 12:32:06 version -- app/version.sh@22 -- # version=25.1 00:06:06.917 12:32:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:06.917 12:32:06 version -- app/version.sh@28 -- # version=25.1rc0 00:06:06.918 12:32:06 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:06.918 12:32:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:06.918 12:32:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:06.918 12:32:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:06.918 ************************************ 00:06:06.918 END TEST version 00:06:06.918 ************************************ 00:06:06.918 00:06:06.918 real 0m0.180s 00:06:06.918 user 0m0.108s 00:06:06.918 sys 0m0.100s 00:06:06.918 12:32:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.918 12:32:06 version -- common/autotest_common.sh@10 -- # set +x 00:06:06.918 12:32:06 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:06.918 12:32:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:06.918 12:32:06 -- spdk/autotest.sh@194 -- # uname -s 00:06:06.918 12:32:06 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:06.918 12:32:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.918 12:32:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:06.918 12:32:06 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:06.918 12:32:06 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:06.918 12:32:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.918 12:32:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.918 12:32:06 -- common/autotest_common.sh@10 -- # set +x 00:06:06.918 ************************************ 00:06:06.918 START TEST blockdev_nvme 00:06:06.918 ************************************ 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:06.918 * Looking for test storage... 00:06:06.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.918 12:32:06 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.918 --rc genhtml_branch_coverage=1 00:06:06.918 --rc genhtml_function_coverage=1 00:06:06.918 --rc genhtml_legend=1 00:06:06.918 --rc geninfo_all_blocks=1 00:06:06.918 --rc geninfo_unexecuted_blocks=1 00:06:06.918 00:06:06.918 ' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.918 --rc genhtml_branch_coverage=1 00:06:06.918 --rc genhtml_function_coverage=1 00:06:06.918 --rc genhtml_legend=1 00:06:06.918 --rc geninfo_all_blocks=1 00:06:06.918 --rc geninfo_unexecuted_blocks=1 00:06:06.918 00:06:06.918 ' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.918 --rc genhtml_branch_coverage=1 00:06:06.918 --rc genhtml_function_coverage=1 00:06:06.918 --rc genhtml_legend=1 00:06:06.918 --rc geninfo_all_blocks=1 00:06:06.918 --rc geninfo_unexecuted_blocks=1 00:06:06.918 00:06:06.918 ' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.918 --rc genhtml_branch_coverage=1 00:06:06.918 --rc genhtml_function_coverage=1 00:06:06.918 --rc genhtml_legend=1 00:06:06.918 --rc geninfo_all_blocks=1 00:06:06.918 --rc geninfo_unexecuted_blocks=1 00:06:06.918 00:06:06.918 ' 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:06.918 12:32:06 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61544 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61544 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61544 ']' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.918 12:32:06 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.918 12:32:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.918 [2024-12-14 12:32:06.636842] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:06.918 [2024-12-14 12:32:06.637134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61544 ] 00:06:07.176 [2024-12-14 12:32:06.800250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.176 [2024-12-14 12:32:06.913362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.110 12:32:07 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.110 12:32:07 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:08.110 12:32:07 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:08.110 12:32:07 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:08.110 12:32:07 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:08.110 12:32:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:08.110 12:32:07 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:08.110 12:32:07 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:08.110 12:32:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.110 12:32:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.369 12:32:07 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.369 12:32:07 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:08.369 12:32:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:08.370 12:32:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "33b62c82-da5b-4e0a-ba7c-3e7c237455b8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "33b62c82-da5b-4e0a-ba7c-3e7c237455b8",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d47c189b-0cca-41b3-9514-c7e1eefe3614"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d47c189b-0cca-41b3-9514-c7e1eefe3614",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3d391be7-279e-4d45-a1d8-9aa3fc5ea392"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3d391be7-279e-4d45-a1d8-9aa3fc5ea392",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8941b0be-969d-44db-ab02-3d779e089113"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8941b0be-969d-44db-ab02-3d779e089113",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "a9f3f420-1a37-48b9-9502-2dbd7b3b718c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "a9f3f420-1a37-48b9-9502-2dbd7b3b718c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "7174d373-4dd2-4131-af06-1acd021ebceb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7174d373-4dd2-4131-af06-1acd021ebceb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:08.370 12:32:08 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:08.370 12:32:08 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:08.370 12:32:08 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:08.370 12:32:08 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61544 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61544 ']' 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61544 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:08.370 12:32:08 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61544 00:06:08.370 killing process with pid 61544 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61544' 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61544 00:06:08.370 12:32:08 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61544 00:06:10.266 12:32:09 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:10.266 12:32:09 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:10.266 12:32:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:10.266 12:32:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.266 12:32:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.266 ************************************ 00:06:10.266 START TEST bdev_hello_world 00:06:10.266 ************************************ 00:06:10.266 12:32:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:10.266 [2024-12-14 12:32:09.638896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:10.266 [2024-12-14 12:32:09.639017] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61627 ] 00:06:10.266 [2024-12-14 12:32:09.800658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.266 [2024-12-14 12:32:09.898534] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.831 [2024-12-14 12:32:10.436232] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:10.831 [2024-12-14 12:32:10.436281] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:10.831 [2024-12-14 12:32:10.436301] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:10.831 [2024-12-14 12:32:10.438762] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:10.831 [2024-12-14 12:32:10.439095] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:10.831 [2024-12-14 12:32:10.439120] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:10.831 [2024-12-14 12:32:10.439498] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
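The hello_bdev run that just printed "Hello World!" is self-contained and can be reproduced outside the harness. A rough sketch, assuming the same checkout path and assuming gen_nvme.sh accepts --json-with-subsystems to wrap the bdev_nvme_attach_controller entries (like the load_subsystem_config payload shown earlier) into a full config:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # Generate a bdev config attaching the local PCIe NVMe controllers
    # (/tmp output path is arbitrary, chosen for this sketch).
    "$SPDK_DIR/scripts/gen_nvme.sh" --json-with-subsystems > /tmp/bdev.json
    # Write a buffer to Nvme0n1, read it back, and print it; on success the
    # app ends with the "Read string from bdev : Hello World!" line seen above.
    "$SPDK_DIR/build/examples/hello_bdev" --json /tmp/bdev.json -b Nvme0n1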
00:06:10.831 00:06:10.831 [2024-12-14 12:32:10.439516] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:11.765 00:06:11.765 real 0m1.588s 00:06:11.765 user 0m1.305s 00:06:11.765 sys 0m0.177s 00:06:11.765 ************************************ 00:06:11.765 END TEST bdev_hello_world 00:06:11.765 ************************************ 00:06:11.765 12:32:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.765 12:32:11 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 12:32:11 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:11.765 12:32:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:11.765 12:32:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.765 12:32:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 ************************************ 00:06:11.765 START TEST bdev_bounds 00:06:11.765 ************************************ 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:11.765 Process bdevio pid: 61659 00:06:11.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61659 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61659' 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61659 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61659 ']' 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.765 12:32:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:11.765 [2024-12-14 12:32:11.270261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
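The bdev_bounds pass starting here runs bdevio as two cooperating processes: the bdevio app is launched with -w so it initializes and then waits for an RPC, and tests.py fires the CUnit suites. Reduced to its essentials (a sketch, with paths and flags exactly as they appear in the invocation below):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # -w: wait for the perform_tests trigger instead of running immediately;
    # -s 0 mirrors the '-s 0' the harness passes (cf. PRE_RESERVED_MEM=0 above).
    "$SPDK_DIR/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK_DIR/test/bdev/bdev.json" &
    "$SPDK_DIR/test/bdev/bdevio/tests.py" perform_tests   # emits the per-test lines below

One reading note for the suites that follow: the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) notices are the deliberate negative paths of the comparev and passthru tests rather than real failures — each such test still reports passed.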
00:06:11.765 [2024-12-14 12:32:11.270391] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61659 ] 00:06:11.765 [2024-12-14 12:32:11.430334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.022 [2024-12-14 12:32:11.529975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.022 [2024-12-14 12:32:11.530239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.022 [2024-12-14 12:32:11.530246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.587 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.587 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:12.587 12:32:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:12.587 I/O targets: 00:06:12.587 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:12.587 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:12.587 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.587 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.587 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:12.587 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:12.587 00:06:12.587 00:06:12.587 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.587 http://cunit.sourceforge.net/ 00:06:12.587 00:06:12.587 00:06:12.587 Suite: bdevio tests on: Nvme3n1 00:06:12.587 Test: blockdev write read block ...passed 00:06:12.587 Test: blockdev write zeroes read block ...passed 00:06:12.587 Test: blockdev write zeroes read no split ...passed 00:06:12.587 Test: blockdev write zeroes read split ...passed 00:06:12.587 Test: blockdev write zeroes read split partial ...passed 00:06:12.587 Test: blockdev reset ...[2024-12-14 12:32:12.247495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:12.587 passed 00:06:12.587 Test: blockdev write read 8 blocks ...[2024-12-14 12:32:12.250353] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:12.587 passed 00:06:12.587 Test: blockdev write read size > 128k ...passed 00:06:12.587 Test: blockdev write read invalid size ...passed 00:06:12.587 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.587 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.587 Test: blockdev write read max offset ...passed 00:06:12.587 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.587 Test: blockdev writev readv 8 blocks ...passed 00:06:12.587 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.587 Test: blockdev writev readv block ...passed 00:06:12.587 Test: blockdev writev readv size > 128k ...passed 00:06:12.587 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.587 Test: blockdev comparev and writev ...[2024-12-14 12:32:12.257849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afc0a000 len:0x1000 00:06:12.587 [2024-12-14 12:32:12.257896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.587 passed 00:06:12.587 Test: blockdev nvme passthru rw ...passed 00:06:12.587 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.587 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:12.258692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.587 [2024-12-14 12:32:12.258728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.587 passed 00:06:12.587 Test: blockdev copy ...passed 00:06:12.587 Suite: bdevio tests on: Nvme2n3 00:06:12.587 Test: blockdev write read block ...passed 00:06:12.587 Test: blockdev write zeroes read block ...passed 00:06:12.587 Test: blockdev write zeroes read no split ...passed 00:06:12.587 Test: blockdev write zeroes read split ...passed 00:06:12.587 Test: blockdev write zeroes read split partial ...passed 00:06:12.587 Test: blockdev reset ...[2024-12-14 12:32:12.317752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.587 passed 00:06:12.587 Test: blockdev write read 8 blocks ...[2024-12-14 12:32:12.320780] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:12.587 passed 00:06:12.587 Test: blockdev write read size > 128k ...passed 00:06:12.587 Test: blockdev write read invalid size ...passed 00:06:12.587 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.587 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.587 Test: blockdev write read max offset ...passed 00:06:12.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.845 Test: blockdev writev readv 8 blocks ...passed 00:06:12.845 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.845 Test: blockdev writev readv block ...passed 00:06:12.845 Test: blockdev writev readv size > 128k ...passed 00:06:12.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.845 Test: blockdev comparev and writev ...[2024-12-14 12:32:12.327238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292e06000 len:0x1000 00:06:12.845 [2024-12-14 12:32:12.327276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev nvme passthru rw ...passed 00:06:12.845 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.845 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:12.327811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.845 [2024-12-14 12:32:12.327835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev copy ...passed 00:06:12.845 Suite: bdevio tests on: Nvme2n2 00:06:12.845 Test: blockdev write read block ...passed 00:06:12.845 Test: blockdev write zeroes read block ...passed 00:06:12.845 Test: blockdev write zeroes read no split ...passed 00:06:12.845 Test: blockdev write zeroes read split ...passed 00:06:12.845 Test: blockdev write zeroes read split partial ...passed 00:06:12.845 Test: blockdev reset ...[2024-12-14 12:32:12.372293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.845 [2024-12-14 12:32:12.376583] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:12.845 Test: blockdev write read 8 blocks ...
00:06:12.845 passed 00:06:12.845 Test: blockdev write read size > 128k ...passed 00:06:12.845 Test: blockdev write read invalid size ...passed 00:06:12.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.845 Test: blockdev write read max offset ...passed 00:06:12.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.845 Test: blockdev writev readv 8 blocks ...passed 00:06:12.845 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.845 Test: blockdev writev readv block ...passed 00:06:12.845 Test: blockdev writev readv size > 128k ...passed 00:06:12.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.845 Test: blockdev comparev and writev ...[2024-12-14 12:32:12.387310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d223c000 len:0x1000 00:06:12.845 [2024-12-14 12:32:12.387451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev nvme passthru rw ...passed 00:06:12.845 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.845 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:12.389028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.845 [2024-12-14 12:32:12.389072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev copy ...passed 00:06:12.845 Suite: bdevio tests on: Nvme2n1 00:06:12.845 Test: blockdev write read block ...passed 00:06:12.845 Test: blockdev write zeroes read block ...passed 00:06:12.845 Test: blockdev write zeroes read no split ...passed 00:06:12.845 Test: blockdev write zeroes read split ...passed 00:06:12.845 Test: blockdev write zeroes read split partial ...passed 00:06:12.845 Test: blockdev reset ...[2024-12-14 12:32:12.444040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:12.845 [2024-12-14 12:32:12.447657] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:12.845 Test: blockdev write read 8 blocks ...
00:06:12.845 passed 00:06:12.845 Test: blockdev write read size > 128k ...passed 00:06:12.845 Test: blockdev write read invalid size ...passed 00:06:12.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.845 Test: blockdev write read max offset ...passed 00:06:12.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.845 Test: blockdev writev readv 8 blocks ...passed 00:06:12.845 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.845 Test: blockdev writev readv block ...passed 00:06:12.845 Test: blockdev writev readv size > 128k ...passed 00:06:12.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.845 Test: blockdev comparev and writev ...[2024-12-14 12:32:12.458082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2238000 len:0x1000 00:06:12.845 [2024-12-14 12:32:12.458207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev nvme passthru rw ...passed 00:06:12.845 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.845 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:12.459681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.845 [2024-12-14 12:32:12.459712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev copy ...passed 00:06:12.845 Suite: bdevio tests on: Nvme1n1 00:06:12.845 Test: blockdev write read block ...passed 00:06:12.845 Test: blockdev write zeroes read block ...passed 00:06:12.845 Test: blockdev write zeroes read no split ...passed 00:06:12.845 Test: blockdev write zeroes read split ...passed 00:06:12.845 Test: blockdev write zeroes read split partial ...passed 00:06:12.845 Test: blockdev reset ...[2024-12-14 12:32:12.515189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:12.845 [2024-12-14 12:32:12.518237] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed
00:06:12.845 00:06:12.845 Test: blockdev write read 8 blocks ...passed 00:06:12.845 Test: blockdev write read size > 128k ...passed 00:06:12.845 Test: blockdev write read invalid size ...passed 00:06:12.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.845 Test: blockdev write read max offset ...passed 00:06:12.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.845 Test: blockdev writev readv 8 blocks ...passed 00:06:12.845 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.845 Test: blockdev writev readv block ...passed 00:06:12.845 Test: blockdev writev readv size > 128k ...passed 00:06:12.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.845 Test: blockdev comparev and writev ...[2024-12-14 12:32:12.532189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2234000 len:0x1000 00:06:12.845 [2024-12-14 12:32:12.532330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev nvme passthru rw ...passed 00:06:12.845 Test: blockdev nvme passthru vendor specific ...passed 00:06:12.845 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:12.534018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:12.845 [2024-12-14 12:32:12.534051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.845 passed 00:06:12.845 Test: blockdev copy ...passed 00:06:12.845 Suite: bdevio tests on: Nvme0n1 00:06:12.845 Test: blockdev write read block ...passed 00:06:12.845 Test: blockdev write zeroes read block ...passed 00:06:12.845 Test: blockdev write zeroes read no split ...passed 00:06:12.845 Test: blockdev write zeroes read split ...passed 00:06:13.103 Test: blockdev write zeroes read split partial ...passed 00:06:13.103 Test: blockdev reset ...[2024-12-14 12:32:12.584714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:13.103 [2024-12-14 12:32:12.588426] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:13.103 passed 00:06:13.103 Test: blockdev write read 8 blocks ...passed 00:06:13.103 Test: blockdev write read size > 128k ...passed 00:06:13.103 Test: blockdev write read invalid size ...passed 00:06:13.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:13.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:13.103 Test: blockdev write read max offset ...passed 00:06:13.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:13.103 Test: blockdev writev readv 8 blocks ...passed 00:06:13.103 Test: blockdev writev readv 30 x 1block ...passed 00:06:13.103 Test: blockdev writev readv block ...passed 00:06:13.103 Test: blockdev writev readv size > 128k ...passed 00:06:13.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:13.103 Test: blockdev comparev and writev ...[2024-12-14 12:32:12.597664] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:13.103 separate metadata which is not supported yet.
00:06:13.103 passed 00:06:13.103 Test: blockdev nvme passthru rw ...passed 00:06:13.103 Test: blockdev nvme passthru vendor specific ...[2024-12-14 12:32:12.598688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:13.103 [2024-12-14 12:32:12.598927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:13.103 passed 00:06:13.103 Test: blockdev nvme admin passthru ...passed 00:06:13.103 Test: blockdev copy ...passed 00:06:13.103 00:06:13.103 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.103 suites 6 6 n/a 0 0 00:06:13.103 tests 138 138 138 0 0 00:06:13.103 asserts 893 893 893 0 n/a 00:06:13.103 00:06:13.103 Elapsed time = 1.056 seconds 00:06:13.103 0 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61659 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61659 ']' 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61659 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61659 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.103 killing process with pid 61659 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61659' 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61659 00:06:13.103 12:32:12 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61659 00:06:13.668 ************************************ 00:06:13.668 END TEST bdev_bounds 00:06:13.668 ************************************ 00:06:13.668 12:32:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:13.668 00:06:13.668 real 0m2.128s 00:06:13.668 user 0m5.409s 00:06:13.668 sys 0m0.288s 00:06:13.668 12:32:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.668 12:32:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:13.668 12:32:13 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.668 12:32:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:13.668 12:32:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.668 12:32:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.669 ************************************ 00:06:13.669 START TEST bdev_nbd 00:06:13.669 ************************************ 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:13.669 12:32:13
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61719 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61719 /var/tmp/spdk-nbd.sock 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61719 ']' 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.669 12:32:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:13.927 [2024-12-14 12:32:13.457167] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
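Everything in the nbd phase below is driven through rpc.py against the dedicated /var/tmp/spdk-nbd.sock socket of the bdev_svc app just started. Condensed to its essentials, the per-bdev round trip that the harness repeats for all six bdevs looks roughly like this sketch (every command name, flag, and path below appears verbatim in this log):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC nbd_start_disk Nvme0n1 /dev/nbd0   # export the bdev as a kernel block device
    # waitfornbd: poll /proc/partitions until nbd0 appears, then prove the
    # device with a single-block O_DIRECT read, exactly as the dd calls below do.
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_get_disks                      # prints [] once every export is stopped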
00:06:13.927 [2024-12-14 12:32:13.457417] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.927 [2024-12-14 12:32:13.619246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.185 [2024-12-14 12:32:13.721156] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.751 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:14.752 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:14.752 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:14.752 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:14.752 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:14.752 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.752 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.010 1+0 records in 
00:06:15.010 1+0 records out 00:06:15.010 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640005 s, 6.4 MB/s 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.010 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.268 1+0 records in 00:06:15.268 1+0 records out 00:06:15.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512345 s, 8.0 MB/s 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.268 12:32:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:15.268 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.527 1+0 records in 00:06:15.527 1+0 records out 00:06:15.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000999016 s, 4.1 MB/s 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.527 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.528 1+0 records in 00:06:15.528 1+0 records out 00:06:15.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111397 s, 3.7 MB/s 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.528 12:32:15 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.528 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.786 1+0 records in 00:06:15.786 1+0 records out 00:06:15.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816828 s, 5.0 MB/s 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.786 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:16.044 1+0 records in 00:06:16.044 1+0 records out 00:06:16.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127119 s, 3.2 MB/s 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.044 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:16.045 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:16.045 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.045 12:32:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:16.045 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:16.045 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:16.045 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd0", 00:06:16.303 "bdev_name": "Nvme0n1" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd1", 00:06:16.303 "bdev_name": "Nvme1n1" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd2", 00:06:16.303 "bdev_name": "Nvme2n1" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd3", 00:06:16.303 "bdev_name": "Nvme2n2" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd4", 00:06:16.303 "bdev_name": "Nvme2n3" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd5", 00:06:16.303 "bdev_name": "Nvme3n1" 00:06:16.303 } 00:06:16.303 ]' 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd0", 00:06:16.303 "bdev_name": "Nvme0n1" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd1", 00:06:16.303 "bdev_name": "Nvme1n1" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd2", 00:06:16.303 "bdev_name": "Nvme2n1" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd3", 00:06:16.303 "bdev_name": "Nvme2n2" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd4", 00:06:16.303 "bdev_name": "Nvme2n3" 00:06:16.303 }, 00:06:16.303 { 00:06:16.303 "nbd_device": "/dev/nbd5", 00:06:16.303 "bdev_name": "Nvme3n1" 00:06:16.303 } 00:06:16.303 ]' 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.303 12:32:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.561 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.818 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.819 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.076 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:17.334 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:17.334 12:32:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.334 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.593 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.851 12:32:17 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.851 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:18.110 /dev/nbd0 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.110 
12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.110 1+0 records in 00:06:18.110 1+0 records out 00:06:18.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839307 s, 4.9 MB/s 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.110 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.369 /dev/nbd1 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.369 1+0 records in 00:06:18.369 1+0 records out 00:06:18.369 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435427 s, 9.4 MB/s 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.369 12:32:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:18.627 /dev/nbd10 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.628 1+0 records in 00:06:18.628 1+0 records out 00:06:18.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466296 s, 8.8 MB/s 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.628 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:18.886 /dev/nbd11 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.886 12:32:18 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.886 1+0 records in 00:06:18.886 1+0 records out 00:06:18.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614018 s, 6.7 MB/s 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.886 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:19.144 /dev/nbd12 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.144 1+0 records in 00:06:19.144 1+0 records out 00:06:19.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598587 s, 6.8 MB/s 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.144 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:19.145 /dev/nbd13 
00:06:19.145 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.403 1+0 records in 00:06:19.403 1+0 records out 00:06:19.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000509789 s, 8.0 MB/s 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.403 12:32:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.403 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd0", 00:06:19.403 "bdev_name": "Nvme0n1" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd1", 00:06:19.403 "bdev_name": "Nvme1n1" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd10", 00:06:19.403 "bdev_name": "Nvme2n1" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd11", 00:06:19.403 "bdev_name": "Nvme2n2" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd12", 00:06:19.403 "bdev_name": "Nvme2n3" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd13", 00:06:19.403 "bdev_name": "Nvme3n1" 00:06:19.403 } 00:06:19.403 ]' 00:06:19.403 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd0", 00:06:19.403 "bdev_name": "Nvme0n1" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd1", 00:06:19.403 "bdev_name": "Nvme1n1" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd10", 00:06:19.403 "bdev_name": "Nvme2n1" 
00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd11", 00:06:19.403 "bdev_name": "Nvme2n2" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd12", 00:06:19.403 "bdev_name": "Nvme2n3" 00:06:19.403 }, 00:06:19.403 { 00:06:19.403 "nbd_device": "/dev/nbd13", 00:06:19.404 "bdev_name": "Nvme3n1" 00:06:19.404 } 00:06:19.404 ]' 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.404 /dev/nbd1 00:06:19.404 /dev/nbd10 00:06:19.404 /dev/nbd11 00:06:19.404 /dev/nbd12 00:06:19.404 /dev/nbd13' 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.404 /dev/nbd1 00:06:19.404 /dev/nbd10 00:06:19.404 /dev/nbd11 00:06:19.404 /dev/nbd12 00:06:19.404 /dev/nbd13' 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:19.404 256+0 records in 00:06:19.404 256+0 records out 00:06:19.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00725563 s, 145 MB/s 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.404 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.689 256+0 records in 00:06:19.689 256+0 records out 00:06:19.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0561163 s, 18.7 MB/s 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.689 256+0 records in 00:06:19.689 256+0 records out 00:06:19.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0530491 s, 19.8 MB/s 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:19.689 256+0 records in 00:06:19.689 256+0 records out 
00:06:19.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0547939 s, 19.1 MB/s 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:19.689 256+0 records in 00:06:19.689 256+0 records out 00:06:19.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.051822 s, 20.2 MB/s 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:19.689 256+0 records in 00:06:19.689 256+0 records out 00:06:19.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0517283 s, 20.3 MB/s 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.689 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:19.962 256+0 records in 00:06:19.962 256+0 records out 00:06:19.962 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0535695 s, 19.6 MB/s 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.962 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.221 12:32:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.480 
12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.480 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.739 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.997 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.256 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.514 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.514 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.514 12:32:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:21.514 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:21.514 malloc_lvol_verify 00:06:21.773 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:21.773 2cec2747-ecdd-446f-a7f9-0d984a6b1470 00:06:21.773 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:22.030 c93402f3-84a9-47b6-a98e-5f76fa5939f4 00:06:22.030 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:22.289 /dev/nbd0 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:22.289 mke2fs 1.47.0 (5-Feb-2023) 00:06:22.289 Discarding device blocks: 0/4096 done 00:06:22.289 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:22.289 00:06:22.289 Allocating group tables: 0/1 done 00:06:22.289 Writing inode tables: 0/1 done 00:06:22.289 Creating journal (1024 blocks): done 00:06:22.289 Writing superblocks and filesystem accounting information: 0/1 done 00:06:22.289 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.289 12:32:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61719 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61719 ']' 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61719 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61719 00:06:22.547 killing process with pid 61719 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61719' 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61719 00:06:22.547 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61719 00:06:23.113 ************************************ 00:06:23.113 END TEST bdev_nbd 00:06:23.113 ************************************ 00:06:23.113 12:32:22 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:23.113 00:06:23.113 real 0m9.360s 00:06:23.113 user 0m13.586s 00:06:23.113 sys 0m3.028s 00:06:23.113 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.113 12:32:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:23.113 12:32:22 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:23.113 skipping fio tests on NVMe due to multi-ns failures. 00:06:23.113 12:32:22 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:23.113 12:32:22 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
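Everything from the top of this section down to the END TEST bdev_nbd marker above is SPDK's nbd_common.sh machinery at work: each bdev is exported as a /dev/nbdX device over the spdk-nbd RPC socket, readiness is polled via /proc/partitions plus a single O_DIRECT 4 KiB read, 1 MiB of random data is pushed through every device and compared back with cmp, and the devices are torn down again with nbd_stop_disk. The following is a minimal standalone sketch of that cycle, distilled from the commands visible in the trace (rpc.py, grep, dd, stat, cmp); the sleep interval is an assumption and the real helpers carry more error handling, so treat it as illustrative rather than the canonical nbd_common.sh code.

    #!/usr/bin/env bash
    set -euo pipefail
    # Sketch only: assumes an spdk-nbd target is already listening on $SOCK
    # and that a bdev named Nvme0n1 exists. Paths match the trace above.
    SOCK=/var/tmp/spdk-nbd.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NBD=/dev/nbd0

    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 "$NBD"

    # waitfornbd pattern: poll /proc/partitions (up to 20 tries, interval
    # assumed), then prove the device is readable with one direct 4 KiB read.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$NBD")" /proc/partitions && break
        sleep 0.1
    done
    dd if="$NBD" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [[ "$(stat -c %s /tmp/nbdtest)" != 0 ]]
    rm -f /tmp/nbdtest

    # nbd_dd_data_verify pattern: write 1 MiB of random data through the
    # device with O_DIRECT, then compare it back byte-for-byte.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of="$NBD" bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest "$NBD"
    rm -f /tmp/nbdrandtest

    # Teardown mirrors waitfornbd_exit: stop the disk, then poll until the
    # device has left /proc/partitions.
    "$RPC" -s "$SOCK" nbd_stop_disk "$NBD"
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$NBD")" /proc/partitions || break
        sleep 0.1
    done

The tail of the test repeats the same start/stop cycle against an lvol (bdev_malloc_create, then bdev_lvol_create_lvstore, then bdev_lvol_create) and additionally runs mkfs.ext4 on the exported device, which is where the mke2fs output in the trace above comes from.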
00:06:23.113 12:32:22 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:06:23.113 12:32:22 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:23.113 12:32:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:23.113 12:32:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.113 12:32:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:23.113 ************************************
00:06:23.113 START TEST bdev_verify
00:06:23.113 ************************************
00:06:23.114 12:32:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:06:23.372 [2024-12-14 12:32:22.877634] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:23.372 [2024-12-14 12:32:22.877750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62086 ]
00:06:23.372 [2024-12-14 12:32:23.032973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:23.631 [2024-12-14 12:32:23.114041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:23.631 [2024-12-14 12:32:23.114070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.197 Running I/O for 5 seconds...
00:06:26.505 20736.00 IOPS, 81.00 MiB/s [2024-12-14T12:32:26.808Z] 20768.00 IOPS, 81.12 MiB/s [2024-12-14T12:32:28.181Z] 20501.33 IOPS, 80.08 MiB/s [2024-12-14T12:32:28.748Z] 23104.00 IOPS, 90.25 MiB/s [2024-12-14T12:32:29.006Z] 23116.80 IOPS, 90.30 MiB/s
00:06:29.269 Latency(us)
00:06:29.269 [2024-12-14T12:32:29.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:29.269 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x0 length 0xbd0bd
00:06:29.269 Nvme0n1 : 5.06 1911.74 7.47 0.00 0.00 66658.34 8872.57 74206.92
00:06:29.269 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:29.269 Nvme0n1 : 5.04 1903.71 7.44 0.00 0.00 67003.59 12300.60 76626.71
00:06:29.269 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x0 length 0xa0000
00:06:29.269 Nvme1n1 : 5.06 1910.74 7.46 0.00 0.00 66543.39 8519.68 70173.93
00:06:29.269 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0xa0000 length 0xa0000
00:06:29.269 Nvme1n1 : 5.04 1903.17 7.43 0.00 0.00 66923.03 13913.80 73803.62
00:06:29.269 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x0 length 0x80000
00:06:29.269 Nvme2n1 : 5.08 1915.61 7.48 0.00 0.00 66374.57 15627.82 64527.75
00:06:29.269 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x80000 length 0x80000
00:06:29.269 Nvme2n1 : 5.05 1902.60 7.43 0.00 0.00 66807.56 14720.39 70173.93
00:06:29.269 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x0 length 0x80000
00:06:29.269 Nvme2n2 : 5.08 1914.70 7.48 0.00 0.00 66294.52 15627.82 65334.35
00:06:29.269 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x80000 length 0x80000
00:06:29.269 Nvme2n2 : 5.06 1908.61 7.46 0.00 0.00 66494.50 5646.18 71787.13
00:06:29.269 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x0 length 0x80000
00:06:29.269 Nvme2n3 : 5.08 1913.56 7.47 0.00 0.00 66194.93 13510.50 69770.63
00:06:29.269 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x80000 length 0x80000
00:06:29.269 Nvme2n3 : 5.08 1916.35 7.49 0.00 0.00 66185.14 10334.52 75013.51
00:06:29.269 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x0 length 0x20000
00:06:29.269 Nvme3n1 : 5.09 1913.06 7.47 0.00 0.00 66088.52 7864.32 74610.22
00:06:29.269 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:29.269 Verification LBA range: start 0x20000 length 0x20000
00:06:29.269 Nvme3n1 : 5.08 1915.82 7.48 0.00 0.00 66066.83 10485.76 76626.71
00:06:29.269 [2024-12-14T12:32:29.006Z] ===================================================================================================================
00:06:29.269 [2024-12-14T12:32:29.006Z] Total : 22929.65 89.57 0.00 0.00 66467.96 5646.18 76626.71
00:06:30.676
00:06:30.676 real 0m7.560s
00:06:30.676 user 0m14.251s
00:06:30.676 sys 0m0.209s
00:06:30.676 12:32:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.676 12:32:30 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:30.676 ************************************
00:06:30.676 END TEST bdev_verify
00:06:30.676 ************************************
00:06:30.933 12:32:30 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:30.933 12:32:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:30.933 12:32:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.933 12:32:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:30.933 ************************************
00:06:30.933 START TEST bdev_verify_big_io
00:06:30.933 ************************************
00:06:30.934 12:32:30 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:30.934 [2024-12-14 12:32:30.503889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
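The bdev_verify pass that just finished and the bdev_verify_big_io pass starting here are the same bdevperf invocation run twice, differing only in I/O size (-o 4096 versus -o 65536); the -w verify workload reads completed writes back and checks the data. A hedged reproduction of the two commands, with the flags copied verbatim from the trace (the flag comments are interpretation, not from the log):

    # Both runs as they appear in the trace; bdev.json is generated earlier
    # in the job and its contents are not shown in this log.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q 128: queue depth, -o: I/O size in bytes, -w verify: read-back
    # verification workload, -t 5: run for 5 seconds, -m 0x3: cores 0 and 1.
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify -t 5 -C -m 0x3 ''   # bdev_verify
    "$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''   # bdev_verify_big_io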
00:06:30.934 [2024-12-14 12:32:30.504022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62184 ]
00:06:30.934 [2024-12-14 12:32:30.666615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:31.191 [2024-12-14 12:32:30.770314] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.191 [2024-12-14 12:32:30.770447] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.758 Running I/O for 5 seconds...
00:06:37.571 1323.00 IOPS, 82.69 MiB/s [2024-12-14T12:32:37.876Z] 3306.50 IOPS, 206.66 MiB/s
00:06:38.139 Latency(us)
00:06:38.139 [2024-12-14T12:32:37.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:38.139 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x0 length 0xbd0b
00:06:38.139 Nvme0n1 : 5.83 82.32 5.14 0.00 0.00 1489841.44 21778.12 1535760.54
00:06:38.139 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:38.139 Nvme0n1 : 5.45 169.43 10.59 0.00 0.00 727350.52 19055.85 955010.76
00:06:38.139 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x0 length 0xa000
00:06:38.139 Nvme1n1 : 5.87 87.19 5.45 0.00 0.00 1350015.02 38918.30 1264743.98
00:06:38.139 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0xa000 length 0xa000
00:06:38.139 Nvme1n1 : 5.53 173.57 10.85 0.00 0.00 694575.73 80659.69 796917.76
00:06:38.139 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x0 length 0x8000
00:06:38.139 Nvme2n1 : 5.87 87.15 5.45 0.00 0.00 1279065.40 39321.60 1303460.63
00:06:38.139 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x8000 length 0x8000
00:06:38.139 Nvme2n1 : 5.60 172.92 10.81 0.00 0.00 678184.65 70173.93 896935.78
00:06:38.139 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x0 length 0x8000
00:06:38.139 Nvme2n2 : 5.98 102.15 6.38 0.00 0.00 1045102.01 21072.34 1406705.03
00:06:38.139 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x8000 length 0x8000
00:06:38.139 Nvme2n2 : 5.74 174.66 10.92 0.00 0.00 653481.63 69770.63 1264743.98
00:06:38.139 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x0 length 0x8000
00:06:38.139 Nvme2n3 : 6.05 126.88 7.93 0.00 0.00 816374.15 14115.45 1438968.91
00:06:38.139 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x8000 length 0x8000
00:06:38.139 Nvme2n3 : 5.79 190.10 11.88 0.00 0.00 587541.76 16837.71 961463.53
00:06:38.139 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x0 length 0x2000
00:06:38.139 Nvme3n1 : 6.29 244.28 15.27 0.00 0.00 406351.19 510.42 1484138.34
00:06:38.139 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:38.139 Verification LBA range: start 0x2000 length 0x2000
00:06:38.139 Nvme3n1 : 5.83 207.59 12.97 0.00 0.00 523337.95 444.26 780785.82
00:06:38.139 [2024-12-14T12:32:37.876Z] ===================================================================================================================
00:06:38.139 [2024-12-14T12:32:37.876Z] Total : 1818.24 113.64 0.00 0.00 744595.42 444.26 1535760.54
00:06:40.668
00:06:40.668 real 0m9.608s
00:06:40.668 user 0m18.239s
00:06:40.668 sys 0m0.256s
00:06:40.668 12:32:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:40.668 12:32:40 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:40.668 ************************************
00:06:40.668 END TEST bdev_verify_big_io
00:06:40.668 ************************************
00:06:40.668 12:32:40 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:40.668 12:32:40 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:40.668 12:32:40 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:40.668 12:32:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:40.668 ************************************
00:06:40.668 START TEST bdev_write_zeroes
00:06:40.668 ************************************
00:06:40.668 12:32:40 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:40.668 [2024-12-14 12:32:40.142353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:40.668 [2024-12-14 12:32:40.142452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62299 ]
00:06:40.668 [2024-12-14 12:32:40.301118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.668 [2024-12-14 12:32:40.400966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.603 Running I/O for 1 seconds...
00:06:42.594 24157.00 IOPS, 94.36 MiB/s
00:06:42.594 Latency(us)
00:06:42.594 [2024-12-14T12:32:42.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:42.594 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:42.594 Nvme0n1 : 1.02 3978.97 15.54 0.00 0.00 32108.71 7108.14 706578.90
00:06:42.594 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:42.594 Nvme1n1 : 1.02 4260.25 16.64 0.00 0.00 29963.05 7360.20 324251.96
00:06:42.594 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:42.594 Nvme2n1 : 1.02 4131.22 16.14 0.00 0.00 30825.16 7158.55 329091.54
00:06:42.594 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:42.594 Nvme2n2 : 1.02 4127.46 16.12 0.00 0.00 30805.81 7108.14 330704.74
00:06:42.594 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:42.594 Nvme2n3 : 1.02 4123.80 16.11 0.00 0.00 30722.55 7309.78 329091.54
00:06:42.594 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:42.594 Nvme3n1 : 1.03 4120.19 16.09 0.00 0.00 30724.27 6856.07 329091.54
00:06:42.594 [2024-12-14T12:32:42.331Z] ===================================================================================================================
00:06:42.594 [2024-12-14T12:32:42.331Z] Total : 24741.89 96.65 0.00 0.00 30845.68 6856.07 706578.90
00:06:43.161
00:06:43.161 real 0m2.543s
00:06:43.161 user 0m2.257s
00:06:43.161 sys 0m0.176s
00:06:43.161 12:32:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:43.161 12:32:42 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:43.161 ************************************
00:06:43.161 END TEST bdev_write_zeroes
00:06:43.161 ************************************
00:06:43.161 12:32:42 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:43.161 12:32:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:06:43.161 12:32:42 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:43.161 12:32:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:06:43.161 ************************************
00:06:43.161 START TEST bdev_json_nonenclosed
00:06:43.161 ************************************
00:06:43.161 12:32:42 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:43.161 [2024-12-14 12:32:42.747482] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:43.161 [2024-12-14 12:32:42.747583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62354 ] 00:06:43.161 [2024-12-14 12:32:42.895771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.420 [2024-12-14 12:32:42.999237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.420 [2024-12-14 12:32:42.999320] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:43.420 [2024-12-14 12:32:42.999336] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:43.420 [2024-12-14 12:32:42.999345] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.420 00:06:43.420 real 0m0.461s 00:06:43.420 user 0m0.269s 00:06:43.420 sys 0m0.089s 00:06:43.420 12:32:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.420 ************************************ 00:06:43.420 END TEST bdev_json_nonenclosed 00:06:43.420 ************************************ 00:06:43.420 12:32:43 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:43.681 12:32:43 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.681 12:32:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:43.681 12:32:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.681 12:32:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.681 ************************************ 00:06:43.681 START TEST bdev_json_nonarray 00:06:43.681 ************************************ 00:06:43.681 12:32:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.681 [2024-12-14 12:32:43.266915] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:43.681 [2024-12-14 12:32:43.267049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62374 ] 00:06:43.940 [2024-12-14 12:32:43.425656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.940 [2024-12-14 12:32:43.535952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.940 [2024-12-14 12:32:43.536044] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
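
Both JSON negative tests feed bdevperf a deliberately malformed config and expect the loader to reject it before any subsystem starts; the rpc_server_finish error and app_stop warning that follow are the normal error-path teardown. A sketch of the three config shapes involved, where the file contents are illustrative stand-ins rather than verbatim copies of test/bdev/nonenclosed.json and nonarray.json:

    # Accepted: a top-level object whose "subsystems" value is an array.
    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF

    # Rejected with "not enclosed in {}": the top level is not a JSON object.
    cat > nonenclosed.json <<'EOF'
    [ { "subsystem": "bdev", "config": [] } ]
    EOF

    # Rejected with "'subsystems' should be an array": wrong value type.
    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF
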
00:06:43.940 [2024-12-14 12:32:43.536074] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:43.940 [2024-12-14 12:32:43.536084] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.198 00:06:44.199 real 0m0.515s 00:06:44.199 user 0m0.315s 00:06:44.199 sys 0m0.097s 00:06:44.199 12:32:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.199 12:32:43 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:44.199 ************************************ 00:06:44.199 END TEST bdev_json_nonarray 00:06:44.199 ************************************ 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:44.199 12:32:43 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:44.199 00:06:44.199 real 0m37.351s 00:06:44.199 user 0m58.874s 00:06:44.199 sys 0m5.088s 00:06:44.199 ************************************ 00:06:44.199 END TEST blockdev_nvme 00:06:44.199 ************************************ 00:06:44.199 12:32:43 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.199 12:32:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.199 12:32:43 -- spdk/autotest.sh@209 -- # uname -s 00:06:44.199 12:32:43 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:44.199 12:32:43 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:44.199 12:32:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:44.199 12:32:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.199 12:32:43 -- common/autotest_common.sh@10 -- # set +x 00:06:44.199 ************************************ 00:06:44.199 START TEST blockdev_nvme_gpt 00:06:44.199 ************************************ 00:06:44.199 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:44.199 * Looking for test storage... 
00:06:44.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:44.199 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:44.199 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:44.199 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.457 12:32:43 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.457 --rc genhtml_branch_coverage=1 00:06:44.457 --rc genhtml_function_coverage=1 00:06:44.457 --rc genhtml_legend=1 00:06:44.457 --rc geninfo_all_blocks=1 00:06:44.457 --rc geninfo_unexecuted_blocks=1 00:06:44.457 00:06:44.457 ' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.457 --rc 
genhtml_branch_coverage=1 00:06:44.457 --rc genhtml_function_coverage=1 00:06:44.457 --rc genhtml_legend=1 00:06:44.457 --rc geninfo_all_blocks=1 00:06:44.457 --rc geninfo_unexecuted_blocks=1 00:06:44.457 00:06:44.457 ' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.457 --rc genhtml_branch_coverage=1 00:06:44.457 --rc genhtml_function_coverage=1 00:06:44.457 --rc genhtml_legend=1 00:06:44.457 --rc geninfo_all_blocks=1 00:06:44.457 --rc geninfo_unexecuted_blocks=1 00:06:44.457 00:06:44.457 ' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:44.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.457 --rc genhtml_branch_coverage=1 00:06:44.457 --rc genhtml_function_coverage=1 00:06:44.457 --rc genhtml_legend=1 00:06:44.457 --rc geninfo_all_blocks=1 00:06:44.457 --rc geninfo_unexecuted_blocks=1 00:06:44.457 00:06:44.457 ' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62458 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62458 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62458 ']' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.457 12:32:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:44.457 [2024-12-14 12:32:44.043769] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:44.457 [2024-12-14 12:32:44.043899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62458 ] 00:06:44.715 [2024-12-14 12:32:44.202926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.715 [2024-12-14 12:32:44.313502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.281 12:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.281 12:32:44 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:45.281 12:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:45.281 12:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:45.281 12:32:44 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:45.545 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:45.803 Waiting for block devices as requested 00:06:45.803 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:45.803 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.060 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.060 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:51.348 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:51.348 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:51.348 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:51.349 12:32:50 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
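
The get_zoned_devs loop above walks every controller under /sys/class/nvme and reads each namespace's sysfs zoned attribute; every read here returns "none", so the [[ none != none ]] guard never fires and the zoned_devs map stays empty. The core check, as a standalone sketch:

    # A namespace is zoned when queue/zoned reports anything other than "none"
    # (typically "host-aware" or "host-managed").
    for ns in /sys/block/nvme*n*; do
        [[ -e $ns/queue/zoned ]] || continue
        zoned=$(<"$ns/queue/zoned")
        [[ $zoned != none ]] && echo "${ns##*/} is zoned ($zoned)"
    done
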
00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:51.349 BYT; 00:06:51.349 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:51.349 BYT; 00:06:51.349 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.349 12:32:50 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:51.349 12:32:50 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:52.282 The operation has completed successfully. 00:06:52.283 12:32:51 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:53.216 The operation has completed successfully. 00:06:53.216 12:32:52 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:53.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:54.041 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.041 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.041 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.041 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:54.041 12:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:54.041 12:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.041 12:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.300 [] 00:06:54.300 12:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.300 12:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:54.300 12:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:54.300 12:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:54.300 12:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.300 12:32:53 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:54.300 12:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.300 12:32:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:54.559 12:32:54 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.559 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:54.559 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:54.560 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "dcf79d7d-2cec-46bc-a946-b9d131533ac5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "dcf79d7d-2cec-46bc-a946-b9d131533ac5",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "d5b309d3-7606-4427-8abf-f6ee1bf78b48"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d5b309d3-7606-4427-8abf-f6ee1bf78b48",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "8e007d41-9d92-4936-90bc-b9582039ba6a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8e007d41-9d92-4936-90bc-b9582039ba6a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "400fbcc4-e658-48ac-9506-c8d233569f24"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "400fbcc4-e658-48ac-9506-c8d233569f24",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4393f5c3-f9c9-4f06-ae53-cec9e7bcb7b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4393f5c3-f9c9-4f06-ae53-cec9e7bcb7b4",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:54.560 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:54.560 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:54.560 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:54.560 12:32:54 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62458 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62458 ']' 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62458 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62458 00:06:54.560 killing process with pid 62458 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62458' 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62458 00:06:54.560 12:32:54 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62458 00:06:55.931 12:32:55 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:55.931 12:32:55 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.931 12:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:55.931 12:32:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.931 12:32:55 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.931 ************************************ 00:06:55.931 START TEST bdev_hello_world 00:06:55.931 ************************************ 00:06:55.931 12:32:55 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:55.931 [2024-12-14 12:32:55.529106] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:55.931 [2024-12-14 12:32:55.529233] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63078 ] 00:06:56.189 [2024-12-14 12:32:55.684363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.189 [2024-12-14 12:32:55.760121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.753 [2024-12-14 12:32:56.251827] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:56.753 [2024-12-14 12:32:56.251869] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:56.753 [2024-12-14 12:32:56.251886] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:56.753 [2024-12-14 12:32:56.253926] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:56.753 [2024-12-14 12:32:56.254421] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:56.753 [2024-12-14 12:32:56.254446] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:56.753 [2024-12-14 12:32:56.254671] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
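
That completes the round trip the hello example demonstrates: open the bdev, acquire an I/O channel, write a buffer, and read the same bytes back. Retargeting it at another bdev from the same config only changes the -b argument; a sketch, where the bdev name is illustrative and any name reported by bdev_get_bdevs should work:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # illustrative, matching the log's paths
    # -b selects the target bdev by name from the JSON config.
    "$SPDK_DIR/build/examples/hello_bdev" \
        --json "$SPDK_DIR/test/bdev/bdev.json" -b Nvme1n1p1
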
00:06:56.753 00:06:56.753 [2024-12-14 12:32:56.254691] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:57.320 00:06:57.320 real 0m1.358s 00:06:57.320 user 0m1.080s 00:06:57.320 sys 0m0.174s 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.320 ************************************ 00:06:57.320 END TEST bdev_hello_world 00:06:57.320 ************************************ 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:57.320 12:32:56 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:57.320 12:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:57.320 12:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.320 12:32:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.320 ************************************ 00:06:57.320 START TEST bdev_bounds 00:06:57.320 ************************************ 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63115 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63115' 00:06:57.320 Process bdevio pid: 63115 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63115 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63115 ']' 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:57.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.320 12:32:56 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:57.320 [2024-12-14 12:32:56.925370] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:57.320 [2024-12-14 12:32:56.925610] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63115 ] 00:06:57.578 [2024-12-14 12:32:57.081237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:57.578 [2024-12-14 12:32:57.163978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.578 [2024-12-14 12:32:57.164200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.578 [2024-12-14 12:32:57.164204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.144 12:32:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.144 12:32:57 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:58.144 12:32:57 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:58.144 I/O targets: 00:06:58.144 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:58.144 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:58.144 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:58.144 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.144 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.144 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:58.144 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:58.144 00:06:58.144 00:06:58.144 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.144 http://cunit.sourceforge.net/ 00:06:58.144 00:06:58.144 00:06:58.144 Suite: bdevio tests on: Nvme3n1 00:06:58.144 Test: blockdev write read block ...passed 00:06:58.144 Test: blockdev write zeroes read block ...passed 00:06:58.144 Test: blockdev write zeroes read no split ...passed 00:06:58.404 Test: blockdev write zeroes read split ...passed 00:06:58.404 Test: blockdev write zeroes read split partial ...passed 00:06:58.404 Test: blockdev reset ...[2024-12-14 12:32:57.916622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:58.405 [2024-12-14 12:32:57.920752] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller spassed 00:06:58.405 Test: blockdev write read 8 blocks ...uccessful. 
00:06:58.405 passed 00:06:58.405 Test: blockdev write read size > 128k ...passed 00:06:58.405 Test: blockdev write read invalid size ...passed 00:06:58.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.405 Test: blockdev write read max offset ...passed 00:06:58.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.405 Test: blockdev writev readv 8 blocks ...passed 00:06:58.405 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.405 Test: blockdev writev readv block ...passed 00:06:58.405 Test: blockdev writev readv size > 128k ...passed 00:06:58.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.405 Test: blockdev comparev and writev ...[2024-12-14 12:32:57.938167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:58.405 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b3a04000 len:0x1000 00:06:58.405 [2024-12-14 12:32:57.938308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.405 passed 00:06:58.405 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.405 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:57.940418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.405 [2024-12-14 12:32:57.940453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.405 passed 00:06:58.405 Test: blockdev copy ...passed 00:06:58.405 Suite: bdevio tests on: Nvme2n3 00:06:58.405 Test: blockdev write read block ...passed 00:06:58.405 Test: blockdev write zeroes read block ...passed 00:06:58.405 Test: blockdev write zeroes read no split ...passed 00:06:58.405 Test: blockdev write zeroes read split ...passed 00:06:58.405 Test: blockdev write zeroes read split partial ...passed 00:06:58.405 Test: blockdev reset ...[2024-12-14 12:32:58.000376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.405 [2024-12-14 12:32:58.004023] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:58.405 passed 00:06:58.405 Test: blockdev write read 8 blocks ...passed 00:06:58.405 Test: blockdev write read size > 128k ...passed 00:06:58.405 Test: blockdev write read invalid size ...passed 00:06:58.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.405 Test: blockdev write read max offset ...passed 00:06:58.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.405 Test: blockdev writev readv 8 blocks ...passed 00:06:58.405 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.405 Test: blockdev writev readv block ...passed 00:06:58.405 Test: blockdev writev readv size > 128k ...passed 00:06:58.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.405 Test: blockdev comparev and writev ...[2024-12-14 12:32:58.017696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:06:58.405 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2b3a02000 len:0x1000 00:06:58.405 [2024-12-14 12:32:58.017824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.405 passed 00:06:58.405 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.405 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:58.019294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.405 [2024-12-14 12:32:58.019328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.405 passed 00:06:58.405 Test: blockdev copy ...passed 00:06:58.405 Suite: bdevio tests on: Nvme2n2 00:06:58.405 Test: blockdev write read block ...passed 00:06:58.405 Test: blockdev write zeroes read block ...passed 00:06:58.405 Test: blockdev write zeroes read no split ...passed 00:06:58.405 Test: blockdev write zeroes read split ...passed 00:06:58.405 Test: blockdev write zeroes read split partial ...passed 00:06:58.405 Test: blockdev reset ...[2024-12-14 12:32:58.071519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.405 [2024-12-14 12:32:58.074656] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:58.405 passed 00:06:58.405 Test: blockdev write read 8 blocks ...passed 00:06:58.405 Test: blockdev write read size > 128k ...passed 00:06:58.405 Test: blockdev write read invalid size ...passed 00:06:58.405 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.405 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.405 Test: blockdev write read max offset ...passed 00:06:58.405 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.405 Test: blockdev writev readv 8 blocks ...passed 00:06:58.405 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.405 Test: blockdev writev readv block ...passed 00:06:58.405 Test: blockdev writev readv size > 128k ...passed 00:06:58.405 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.405 Test: blockdev comparev and writev ...[2024-12-14 12:32:58.084089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4038000 len:0x1000 00:06:58.405 [2024-12-14 12:32:58.084243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.405 passed 00:06:58.405 Test: blockdev nvme passthru rw ...passed 00:06:58.405 Test: blockdev nvme passthru vendor specific ...[2024-12-14 12:32:58.085569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.405 [2024-12-14 12:32:58.085695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.405 passed 00:06:58.405 Test: blockdev nvme admin passthru ...passed 00:06:58.405 Test: blockdev copy ...passed 00:06:58.405 Suite: bdevio tests on: Nvme2n1 00:06:58.405 Test: blockdev write read block ...passed 00:06:58.405 Test: blockdev write zeroes read block ...passed 00:06:58.405 Test: blockdev write zeroes read no split ...passed 00:06:58.405 Test: blockdev write zeroes read split ...passed 00:06:58.679 Test: blockdev write zeroes read split partial ...passed 00:06:58.679 Test: blockdev reset ...[2024-12-14 12:32:58.143660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:58.679 [2024-12-14 12:32:58.147206] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:06:58.679 Test: blockdev write read 8 blocks ...uccessful. 
00:06:58.679 passed 00:06:58.679 Test: blockdev write read size > 128k ...passed 00:06:58.679 Test: blockdev write read invalid size ...passed 00:06:58.679 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.679 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.679 Test: blockdev write read max offset ...passed 00:06:58.679 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.679 Test: blockdev writev readv 8 blocks ...passed 00:06:58.679 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.679 Test: blockdev writev readv block ...passed 00:06:58.679 Test: blockdev writev readv size > 128k ...passed 00:06:58.679 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.680 Test: blockdev comparev and writev ...[2024-12-14 12:32:58.163260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d4034000 len:0x1000 00:06:58.680 [2024-12-14 12:32:58.163405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.680 passed 00:06:58.680 Test: blockdev nvme passthru rw ...passed 00:06:58.680 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.680 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:58.165125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:58.680 [2024-12-14 12:32:58.165161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:58.680 passed 00:06:58.680 Test: blockdev copy ...passed 00:06:58.680 Suite: bdevio tests on: Nvme1n1p2 00:06:58.680 Test: blockdev write read block ...passed 00:06:58.680 Test: blockdev write zeroes read block ...passed 00:06:58.680 Test: blockdev write zeroes read no split ...passed 00:06:58.680 Test: blockdev write zeroes read split ...passed 00:06:58.680 Test: blockdev write zeroes read split partial ...passed 00:06:58.680 Test: blockdev reset ...[2024-12-14 12:32:58.228701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:58.680 [2024-12-14 12:32:58.232434] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:06:58.680 Test: blockdev write read 8 blocks ...uccessful. 
00:06:58.680 passed 00:06:58.680 Test: blockdev write read size > 128k ...passed 00:06:58.680 Test: blockdev write read invalid size ...passed 00:06:58.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.680 Test: blockdev write read max offset ...passed 00:06:58.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.680 Test: blockdev writev readv 8 blocks ...passed 00:06:58.680 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.680 Test: blockdev writev readv block ...passed 00:06:58.680 Test: blockdev writev readv size > 128k ...passed 00:06:58.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.680 Test: blockdev comparev and writev ...[2024-12-14 12:32:58.246897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d4030000 len:0x1000 00:06:58.680 [2024-12-14 12:32:58.247028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.680 passed 00:06:58.680 Test: blockdev nvme passthru rw ...passed 00:06:58.680 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.680 Test: blockdev nvme admin passthru ...passed 00:06:58.680 Test: blockdev copy ...passed 00:06:58.680 Suite: bdevio tests on: Nvme1n1p1 00:06:58.680 Test: blockdev write read block ...passed 00:06:58.680 Test: blockdev write zeroes read block ...passed 00:06:58.680 Test: blockdev write zeroes read no split ...passed 00:06:58.680 Test: blockdev write zeroes read split ...passed 00:06:58.680 Test: blockdev write zeroes read split partial ...passed 00:06:58.680 Test: blockdev reset ...[2024-12-14 12:32:58.298259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:58.680 [2024-12-14 12:32:58.301904] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:58.680 passed 00:06:58.680 Test: blockdev write read 8 blocks ...
00:06:58.680 passed 00:06:58.680 Test: blockdev write read size > 128k ...passed 00:06:58.680 Test: blockdev write read invalid size ...passed 00:06:58.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.680 Test: blockdev write read max offset ...passed 00:06:58.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.680 Test: blockdev writev readv 8 blocks ...passed 00:06:58.680 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.680 Test: blockdev writev readv block ...passed 00:06:58.680 Test: blockdev writev readv size > 128k ...passed 00:06:58.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.680 Test: blockdev comparev and writev ...[2024-12-14 12:32:58.317541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b3c0e000 len:0x1000 00:06:58.680 [2024-12-14 12:32:58.317693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:58.680 passed 00:06:58.680 Test: blockdev nvme passthru rw ...passed 00:06:58.680 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.680 Test: blockdev nvme admin passthru ...passed 00:06:58.680 Test: blockdev copy ...passed 00:06:58.680 Suite: bdevio tests on: Nvme0n1 00:06:58.680 Test: blockdev write read block ...passed 00:06:58.680 Test: blockdev write zeroes read block ...passed 00:06:58.680 Test: blockdev write zeroes read no split ...passed 00:06:58.680 Test: blockdev write zeroes read split ...passed 00:06:58.680 Test: blockdev write zeroes read split partial ...passed 00:06:58.680 Test: blockdev reset ...[2024-12-14 12:32:58.370387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:58.680 [2024-12-14 12:32:58.374042] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:58.680 passed 00:06:58.680 Test: blockdev write read 8 blocks ...passed 00:06:58.680 Test: blockdev write read size > 128k ...passed 00:06:58.680 Test: blockdev write read invalid size ...passed 00:06:58.680 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:58.680 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:58.680 Test: blockdev write read max offset ...passed 00:06:58.680 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:58.680 Test: blockdev writev readv 8 blocks ...passed 00:06:58.680 Test: blockdev writev readv 30 x 1block ...passed 00:06:58.680 Test: blockdev writev readv block ...passed 00:06:58.680 Test: blockdev writev readv size > 128k ...passed 00:06:58.680 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:58.680 Test: blockdev comparev and writev ...passed 00:06:58.680 Test: blockdev nvme passthru rw ...[2024-12-14 12:32:58.387928] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:58.680 separate metadata which is not supported yet.
00:06:58.680 passed 00:06:58.680 Test: blockdev nvme passthru vendor specific ...passed 00:06:58.680 Test: blockdev nvme admin passthru ...[2024-12-14 12:32:58.389134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:58.680 [2024-12-14 12:32:58.389178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:58.680 passed 00:06:58.680 Test: blockdev copy ...passed 00:06:58.680 00:06:58.680 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.680 suites 7 7 n/a 0 0 00:06:58.680 tests 161 161 161 0 0 00:06:58.680 asserts 1025 1025 1025 0 n/a 00:06:58.680 00:06:58.680 Elapsed time = 1.345 seconds 00:06:58.680 0 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63115 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63115 ']' 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63115 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63115 00:06:58.968 killing process with pid 63115 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.968 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63115' 00:06:58.969 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63115 00:06:58.969 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63115 00:06:59.541 12:32:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:59.541 00:06:59.541 real 0m2.121s 00:06:59.541 user 0m5.396s 00:06:59.541 sys 0m0.305s 00:06:59.541 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.541 ************************************ 00:06:59.541 END TEST bdev_bounds 00:06:59.541 ************************************ 00:06:59.541 12:32:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.541 12:32:59 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:59.541 12:32:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:59.541 12:32:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.541 12:32:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.541 ************************************ 00:06:59.541 START TEST bdev_nbd 00:06:59.541 ************************************ 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:59.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63169 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63169 /var/tmp/spdk-nbd.sock 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63169 ']' 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.541 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:59.541 [2024-12-14 12:32:59.116610] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
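The trace above launches bdev_svc with its own RPC socket (-r /var/tmp/spdk-nbd.sock) and then blocks in waitforlisten until the app answers on that socket. A minimal sketch of that startup handshake, using the paths from this run; the rpc_get_methods polling loop below is an illustration standing in for the real waitforlisten helper in autotest_common.sh, whose exact probe may differ:

    #!/usr/bin/env bash
    # Start the bdev service against the test's bdev JSON config, then
    # wait for its RPC socket to come up before issuing nbd RPCs.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/spdk-nbd.sock
    "$spdk/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 \
        --json "$spdk/test/bdev/bdev.json" &
    nbd_pid=$!
    # Poll the socket (roughly what waitforlisten does); give up after ~10 s.
    for _ in $(seq 1 100); do
        "$spdk/scripts/rpc.py" -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done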
00:06:59.541 [2024-12-14 12:32:59.116841] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:59.541 [2024-12-14 12:32:59.276639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.801 [2024-12-14 12:32:59.368340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.367 12:32:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.625 1+0 records in 00:07:00.625 1+0 records out 00:07:00.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724391 s, 5.7 MB/s 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.625 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:00.883 1+0 records in 00:07:00.883 1+0 records out 00:07:00.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096606 s, 4.2 MB/s 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:00.883 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.141 1+0 records in 00:07:01.141 1+0 records out 00:07:01.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000626859 s, 6.5 MB/s 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.141 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.400 1+0 records in 00:07:01.400 1+0 records out 00:07:01.400 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000744788 s, 5.5 MB/s 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.400 12:33:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.659 1+0 records in 00:07:01.659 1+0 records out 00:07:01.659 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010751 s, 3.8 MB/s 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
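Each nbd_start_disk RPC traced above is followed by the waitfornbd helper, which loops until the device name shows up in /proc/partitions and then forces a single O_DIRECT read to prove the device is actually backed. A condensed sketch of that loop, with an assumed /tmp scratch path instead of the test tree's nbdtest file; the real helper lives in common/autotest_common.sh:

    # Wait for /dev/$1 to appear, then read one 4 KiB block directly from it.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        # A zero-byte result would mean the read silently failed.
        [ "$size" != 0 ]
    }
    waitfornbd nbd5   # e.g. after the nbd_start_disk Nvme2n3 call above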
00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.659 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.917 1+0 records in 00:07:01.917 1+0 records out 00:07:01.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113254 s, 3.6 MB/s 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.917 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:01.918 1+0 records in 00:07:01.918 1+0 records out 00:07:01.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000890769 s, 4.6 MB/s 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:01.918 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd0", 00:07:02.176 "bdev_name": "Nvme0n1" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd1", 00:07:02.176 "bdev_name": "Nvme1n1p1" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd2", 00:07:02.176 "bdev_name": "Nvme1n1p2" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd3", 00:07:02.176 "bdev_name": "Nvme2n1" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd4", 00:07:02.176 "bdev_name": "Nvme2n2" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd5", 00:07:02.176 "bdev_name": "Nvme2n3" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd6", 00:07:02.176 "bdev_name": "Nvme3n1" 00:07:02.176 } 00:07:02.176 ]' 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd0", 00:07:02.176 "bdev_name": "Nvme0n1" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd1", 00:07:02.176 "bdev_name": "Nvme1n1p1" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd2", 00:07:02.176 "bdev_name": "Nvme1n1p2" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd3", 00:07:02.176 "bdev_name": "Nvme2n1" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd4", 00:07:02.176 "bdev_name": "Nvme2n2" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd5", 00:07:02.176 "bdev_name": "Nvme2n3" 00:07:02.176 }, 00:07:02.176 { 00:07:02.176 "nbd_device": "/dev/nbd6", 00:07:02.176 "bdev_name": "Nvme3n1" 00:07:02.176 } 00:07:02.176 ]' 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.176 12:33:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.434 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.690 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.690 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.690 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.690 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.690 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.690 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.691 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.949 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.207 12:33:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:03.466 12:33:03 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.466 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.725 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:03.983 /dev/nbd0 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.983 1+0 records in 00:07:03.983 1+0 records out 00:07:03.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414655 s, 9.9 MB/s 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.983 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.984 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.984 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.984 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:03.984 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:04.242 /dev/nbd1 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.242 1+0 records in 00:07:04.242 1+0 records out 00:07:04.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428151 s, 9.6 MB/s 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.242 12:33:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:04.501 /dev/nbd10 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.501 1+0 records in 00:07:04.501 1+0 records out 00:07:04.501 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370735 s, 11.0 MB/s 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.501 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:04.759 /dev/nbd11 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.759 1+0 records in 00:07:04.759 1+0 records out 00:07:04.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458698 s, 8.9 MB/s 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:04.759 /dev/nbd12 00:07:04.759 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:05.018 12:33:04 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.018 1+0 records in 00:07:05.018 1+0 records out 00:07:05.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610343 s, 6.7 MB/s 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:05.018 /dev/nbd13 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.018 1+0 records in 00:07:05.018 1+0 records out 00:07:05.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678414 s, 6.0 MB/s 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.018 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:05.276 /dev/nbd14 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.276 1+0 records in 00:07:05.276 1+0 records out 00:07:05.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000688215 s, 6.0 MB/s 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.276 12:33:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd0", 00:07:05.534 "bdev_name": "Nvme0n1" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd1", 00:07:05.534 "bdev_name": "Nvme1n1p1" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd10", 00:07:05.534 "bdev_name": "Nvme1n1p2" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd11", 00:07:05.534 "bdev_name": "Nvme2n1" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd12", 00:07:05.534 "bdev_name": "Nvme2n2" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd13", 00:07:05.534 "bdev_name": "Nvme2n3" 00:07:05.534 }, 00:07:05.534 { 
00:07:05.534 "nbd_device": "/dev/nbd14", 00:07:05.534 "bdev_name": "Nvme3n1" 00:07:05.534 } 00:07:05.534 ]' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd0", 00:07:05.534 "bdev_name": "Nvme0n1" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd1", 00:07:05.534 "bdev_name": "Nvme1n1p1" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd10", 00:07:05.534 "bdev_name": "Nvme1n1p2" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd11", 00:07:05.534 "bdev_name": "Nvme2n1" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd12", 00:07:05.534 "bdev_name": "Nvme2n2" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd13", 00:07:05.534 "bdev_name": "Nvme2n3" 00:07:05.534 }, 00:07:05.534 { 00:07:05.534 "nbd_device": "/dev/nbd14", 00:07:05.534 "bdev_name": "Nvme3n1" 00:07:05.534 } 00:07:05.534 ]' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.534 /dev/nbd1 00:07:05.534 /dev/nbd10 00:07:05.534 /dev/nbd11 00:07:05.534 /dev/nbd12 00:07:05.534 /dev/nbd13 00:07:05.534 /dev/nbd14' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.534 /dev/nbd1 00:07:05.534 /dev/nbd10 00:07:05.534 /dev/nbd11 00:07:05.534 /dev/nbd12 00:07:05.534 /dev/nbd13 00:07:05.534 /dev/nbd14' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:05.534 256+0 records in 00:07:05.534 256+0 records out 00:07:05.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00980482 s, 107 MB/s 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.534 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.793 256+0 records in 00:07:05.793 256+0 records out 00:07:05.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0820336 s, 12.8 MB/s 
00:07:05.793 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.793 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.793 256+0 records in 00:07:05.793 256+0 records out 00:07:05.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0928166 s, 11.3 MB/s 00:07:05.793 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.793 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:05.793 256+0 records in 00:07:05.793 256+0 records out 00:07:05.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0717556 s, 14.6 MB/s 00:07:05.793 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.793 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:06.052 256+0 records in 00:07:06.052 256+0 records out 00:07:06.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0700338 s, 15.0 MB/s 00:07:06.052 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.052 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:06.052 256+0 records in 00:07:06.052 256+0 records out 00:07:06.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.066439 s, 15.8 MB/s 00:07:06.052 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.052 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:06.052 256+0 records in 00:07:06.052 256+0 records out 00:07:06.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0674915 s, 15.5 MB/s 00:07:06.052 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:06.052 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:06.052 256+0 records in 00:07:06.052 256+0 records out 00:07:06.052 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0682211 s, 15.4 MB/s 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
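The dd writes above and the cmp pass that follows are the two arms of nbd_dd_data_verify: a 1 MiB random file is pushed through every nbd device with O_DIRECT, then compared byte-for-byte against each device. A sketch reconstructed from the xtrace (helper and path names as shown in the log, not the verbatim source):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
        if [ "$operation" = write ]; then
            # seed one 1 MiB random file, then write it through every device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"   # any mismatch fails the test
            done
            rm "$tmp_file"
        fi
    }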
00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.310 12:33:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.310 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.311 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.569 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:06.827 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:06.827 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:06.827 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.828 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.086 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:07.344 12:33:06 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.344 12:33:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.344 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.602 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.860 
12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.860 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:07.861 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:08.119 malloc_lvol_verify 00:07:08.119 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:08.377 d0185f4c-eea8-4df7-ad53-1e824b08258b 00:07:08.377 12:33:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:08.636 57022036-d344-49d4-ada5-20f55895a1e9 00:07:08.636 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:08.636 /dev/nbd0 00:07:08.636 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:08.636 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:08.636 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:08.636 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:08.636 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:08.636 mke2fs 1.47.0 (5-Feb-2023) 00:07:08.636 Discarding device blocks: 0/4096 done 00:07:08.636 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:08.636 00:07:08.636 Allocating group tables: 0/1 done 00:07:08.636 Writing inode tables: 0/1 done 00:07:08.636 Creating journal (1024 blocks): done 00:07:08.894 Writing superblocks and filesystem accounting information: 0/1 done 00:07:08.894 00:07:08.894 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:08.894 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.895 12:33:08 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63169 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63169 ']' 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63169 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63169 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.895 killing process with pid 63169 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63169' 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63169 00:07:08.895 12:33:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63169 00:07:09.841 12:33:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:09.841 00:07:09.841 real 0m10.165s 00:07:09.841 user 0m14.605s 00:07:09.841 sys 0m3.384s 00:07:09.841 12:33:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.841 12:33:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:09.841 ************************************ 00:07:09.841 END TEST bdev_nbd 00:07:09.841 ************************************ 00:07:09.841 12:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:09.841 12:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:09.841 12:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:09.841 skipping fio tests on NVMe due to multi-ns failures. 00:07:09.841 12:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
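Teardown mirrors setup: nbd_stop_disks calls nbd_stop_disk for each device over the RPC socket and then waits for the kernel to drop the entry from /proc/partitions. The waitfornbd_exit polling seen throughout the trace above amounts to the loop below (a sketch; the inter-poll delay is an assumption, since the xtrace elides it):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once /proc/partitions no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed delay; not visible in the log
        done
        return 0
    }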
00:07:09.841 12:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:09.841 12:33:09 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.841 12:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:09.841 12:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.841 12:33:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:09.841 ************************************ 00:07:09.841 START TEST bdev_verify 00:07:09.841 ************************************ 00:07:09.841 12:33:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:09.841 [2024-12-14 12:33:09.317476] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:09.841 [2024-12-14 12:33:09.317591] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63575 ] 00:07:09.841 [2024-12-14 12:33:09.472995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.841 [2024-12-14 12:33:09.551661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.841 [2024-12-14 12:33:09.551842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.467 Running I/O for 5 seconds... 
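The verify pass that follows drives every bdev from bdev.json through the bdevperf example app: -q 128 keeps 128 I/Os outstanding, -o 4096 issues 4 KiB I/Os, -w verify writes a pattern and reads it back for comparison, -t 5 runs for five seconds, and -m 0x3 spreads the reactors across two cores (matching the two reactor lines above). The invocation, restated on its own for readability:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3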
00:07:12.776 20352.00 IOPS, 79.50 MiB/s [2024-12-14T12:33:13.450Z] 21760.00 IOPS, 85.00 MiB/s [2024-12-14T12:33:14.388Z] 22720.00 IOPS, 88.75 MiB/s [2024-12-14T12:33:15.324Z] 22528.00 IOPS, 88.00 MiB/s [2024-12-14T12:33:15.324Z] 21952.00 IOPS, 85.75 MiB/s 00:07:15.587 Latency(us) 00:07:15.587 [2024-12-14T12:33:15.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.587 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0xbd0bd 00:07:15.587 Nvme0n1 : 5.06 1516.94 5.93 0.00 0.00 84140.96 16031.11 84289.38 00:07:15.587 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:15.587 Nvme0n1 : 5.07 1591.09 6.22 0.00 0.00 80274.58 13409.67 83886.08 00:07:15.587 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0x4ff80 00:07:15.587 Nvme1n1p1 : 5.06 1516.45 5.92 0.00 0.00 83911.75 18753.38 73400.32 00:07:15.587 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:15.587 Nvme1n1p1 : 5.07 1590.29 6.21 0.00 0.00 80193.14 12804.73 75820.11 00:07:15.587 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0x4ff7f 00:07:15.587 Nvme1n1p2 : 5.07 1515.48 5.92 0.00 0.00 83799.93 19156.68 72997.02 00:07:15.587 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:15.587 Nvme1n1p2 : 5.08 1588.96 6.21 0.00 0.00 80083.47 15526.99 70980.53 00:07:15.587 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0x80000 00:07:15.587 Nvme2n1 : 5.07 1514.81 5.92 0.00 0.00 83660.49 19055.85 70980.53 00:07:15.587 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x80000 length 0x80000 00:07:15.587 Nvme2n1 : 5.08 1588.53 6.21 0.00 0.00 79925.83 15930.29 67754.14 00:07:15.587 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0x80000 00:07:15.587 Nvme2n2 : 5.07 1514.18 5.91 0.00 0.00 83508.39 18551.73 71787.13 00:07:15.587 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x80000 length 0x80000 00:07:15.587 Nvme2n2 : 5.08 1588.12 6.20 0.00 0.00 79786.05 15224.52 70577.23 00:07:15.587 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0x80000 00:07:15.587 Nvme2n3 : 5.08 1523.80 5.95 0.00 0.00 82873.57 2445.00 73400.32 00:07:15.587 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x80000 length 0x80000 00:07:15.587 Nvme2n3 : 5.08 1587.67 6.20 0.00 0.00 79635.57 14317.10 73803.62 00:07:15.587 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x0 length 0x20000 00:07:15.587 Nvme3n1 : 5.09 1533.37 5.99 0.00 0.00 82275.35 6654.42 76626.71 00:07:15.587 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:15.587 Verification LBA range: start 0x20000 length 0x20000 00:07:15.587 Nvme3n1 
: 5.08 1587.23 6.20 0.00 0.00 79493.05 13611.32 78239.90 00:07:15.587 [2024-12-14T12:33:15.324Z] =================================================================================================================== 00:07:15.587 [2024-12-14T12:33:15.324Z] Total : 21756.92 84.99 0.00 0.00 81641.29 2445.00 84289.38 00:07:16.964 00:07:16.964 real 0m7.212s 00:07:16.964 user 0m13.568s 00:07:16.964 sys 0m0.199s 00:07:16.964 12:33:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.964 12:33:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:16.964 ************************************ 00:07:16.964 END TEST bdev_verify 00:07:16.964 ************************************ 00:07:16.964 12:33:16 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.964 12:33:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:16.964 12:33:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.964 12:33:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.964 ************************************ 00:07:16.964 START TEST bdev_verify_big_io 00:07:16.964 ************************************ 00:07:16.964 12:33:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:16.964 [2024-12-14 12:33:16.574623] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:16.964 [2024-12-14 12:33:16.574739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63673 ] 00:07:17.222 [2024-12-14 12:33:16.732527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:17.222 [2024-12-14 12:33:16.829848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.222 [2024-12-14 12:33:16.830040] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.788 Running I/O for 5 seconds... 
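bdev_verify_big_io repeats the same verify workload with -o 65536, so each I/O carries 64 KiB instead of 4 KiB; the aggregate IOPS in the table below drop roughly an order of magnitude versus the 4 KiB run while the per-I/O payload grows sixteenfold. Only the I/O size changes in the command:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 65536 -w verify -t 5 -C -m 0x3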
00:07:23.645 2018.00 IOPS, 126.12 MiB/s [2024-12-14T12:33:23.947Z] 2869.50 IOPS, 179.34 MiB/s [2024-12-14T12:33:23.947Z] 3323.33 IOPS, 207.71 MiB/s 00:07:24.210 Latency(us) 00:07:24.210 [2024-12-14T12:33:23.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.210 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0xbd0b 00:07:24.210 Nvme0n1 : 5.99 106.77 6.67 0.00 0.00 1133435.51 10334.52 1277649.53 00:07:24.210 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:24.210 Nvme0n1 : 5.74 112.23 7.01 0.00 0.00 1091952.97 14619.57 1148594.02 00:07:24.210 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0x4ff8 00:07:24.210 Nvme1n1p1 : 6.00 105.75 6.61 0.00 0.00 1101757.33 101227.91 1084066.26 00:07:24.210 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:24.210 Nvme1n1p1 : 5.74 115.16 7.20 0.00 0.00 1032354.37 93161.94 987274.63 00:07:24.210 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0x4ff7 00:07:24.210 Nvme1n1p2 : 6.05 110.81 6.93 0.00 0.00 1045053.46 126635.72 1258291.20 00:07:24.210 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:24.210 Nvme1n1p2 : 5.86 119.38 7.46 0.00 0.00 976574.82 112923.57 1038896.84 00:07:24.210 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0x8000 00:07:24.210 Nvme2n1 : 6.06 109.33 6.83 0.00 0.00 1029839.50 48799.11 1768060.46 00:07:24.210 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x8000 length 0x8000 00:07:24.210 Nvme2n1 : 5.97 124.97 7.81 0.00 0.00 911794.03 68560.74 961463.53 00:07:24.210 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0x8000 00:07:24.210 Nvme2n2 : 6.08 113.42 7.09 0.00 0.00 962254.27 16131.94 1780966.01 00:07:24.210 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x8000 length 0x8000 00:07:24.210 Nvme2n2 : 6.00 128.23 8.01 0.00 0.00 864691.83 43556.23 1051802.39 00:07:24.210 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0x8000 00:07:24.210 Nvme2n3 : 6.10 118.73 7.42 0.00 0.00 888474.66 18047.61 1587382.74 00:07:24.210 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x8000 length 0x8000 00:07:24.210 Nvme2n3 : 6.03 132.89 8.31 0.00 0.00 810090.71 26214.40 1058255.16 00:07:24.210 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x0 length 0x2000 00:07:24.210 Nvme3n1 : 6.16 153.40 9.59 0.00 0.00 672610.47 381.24 1819682.66 00:07:24.210 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:24.210 Verification LBA range: start 0x2000 length 0x2000 00:07:24.210 Nvme3n1 : 6.05 147.52 9.22 0.00 0.00 713969.70 7864.32 1077613.49 00:07:24.210 
[2024-12-14T12:33:23.947Z] =================================================================================================================== 00:07:24.210 [2024-12-14T12:33:23.947Z] Total : 1698.60 106.16 0.00 0.00 928325.55 381.24 1819682.66 00:07:25.580 00:07:25.580 real 0m8.769s 00:07:25.580 user 0m16.670s 00:07:25.580 sys 0m0.223s 00:07:25.580 12:33:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.580 12:33:25 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:25.580 ************************************ 00:07:25.580 END TEST bdev_verify_big_io 00:07:25.580 ************************************ 00:07:25.837 12:33:25 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.837 12:33:25 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:25.837 12:33:25 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.837 12:33:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:25.837 ************************************ 00:07:25.837 START TEST bdev_write_zeroes 00:07:25.837 ************************************ 00:07:25.837 12:33:25 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:25.837 [2024-12-14 12:33:25.398643] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:25.837 [2024-12-14 12:33:25.398757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63782 ] 00:07:25.837 [2024-12-14 12:33:25.556008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.095 [2024-12-14 12:33:25.635263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.660 Running I/O for 1 seconds... 
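bdev_write_zeroes switches the workload to write_zeroes for one second on a single core (the EAL mask above is 0x1), exercising each bdev's zero-fill path rather than data verification, which is why the table below reports throughput only. The command, as passed by run_test above (including the trailing empty argument):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''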
00:07:27.591 74816.00 IOPS, 292.25 MiB/s 00:07:27.591 Latency(us) 00:07:27.591 [2024-12-14T12:33:27.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:27.591 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme0n1 : 1.02 10617.41 41.47 0.00 0.00 12030.69 9578.34 24197.91 00:07:27.591 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme1n1p1 : 1.03 10604.47 41.42 0.00 0.00 12029.84 9326.28 23895.43 00:07:27.591 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme1n1p2 : 1.03 10591.64 41.37 0.00 0.00 12020.98 9427.10 23290.49 00:07:27.591 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme2n1 : 1.03 10579.45 41.33 0.00 0.00 12011.85 9628.75 22483.89 00:07:27.591 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme2n2 : 1.03 10567.64 41.28 0.00 0.00 11993.03 9275.86 21979.77 00:07:27.591 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme2n3 : 1.03 10555.89 41.23 0.00 0.00 11979.63 8217.21 22685.54 00:07:27.591 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:27.591 Nvme3n1 : 1.03 10544.07 41.19 0.00 0.00 11977.24 7662.67 24500.38 00:07:27.591 [2024-12-14T12:33:27.328Z] =================================================================================================================== 00:07:27.591 [2024-12-14T12:33:27.328Z] Total : 74060.58 289.30 0.00 0.00 12006.18 7662.67 24500.38 00:07:28.523 00:07:28.523 real 0m2.619s 00:07:28.523 user 0m2.333s 00:07:28.524 sys 0m0.173s 00:07:28.524 12:33:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.524 12:33:27 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:28.524 ************************************ 00:07:28.524 END TEST bdev_write_zeroes 00:07:28.524 ************************************ 00:07:28.524 12:33:27 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.524 12:33:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:28.524 12:33:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.524 12:33:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.524 ************************************ 00:07:28.524 START TEST bdev_json_nonenclosed 00:07:28.524 ************************************ 00:07:28.524 12:33:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:28.524 [2024-12-14 12:33:28.068470] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:28.524 [2024-12-14 12:33:28.068667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63835 ] 00:07:28.524 [2024-12-14 12:33:28.224478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.781 [2024-12-14 12:33:28.319568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.781 [2024-12-14 12:33:28.319643] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:28.781 [2024-12-14 12:33:28.319659] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:28.781 [2024-12-14 12:33:28.319668] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:28.781 00:07:28.781 real 0m0.486s 00:07:28.781 user 0m0.287s 00:07:28.781 sys 0m0.095s 00:07:28.781 12:33:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.781 12:33:28 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:28.781 ************************************ 00:07:28.781 END TEST bdev_json_nonenclosed 00:07:28.781 ************************************ 00:07:29.038 12:33:28 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.038 12:33:28 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:29.038 12:33:28 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.038 12:33:28 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.038 ************************************ 00:07:29.038 START TEST bdev_json_nonarray 00:07:29.038 ************************************ 00:07:29.038 12:33:28 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:29.038 [2024-12-14 12:33:28.625792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:29.038 [2024-12-14 12:33:28.625908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63855 ] 00:07:29.296 [2024-12-14 12:33:28.779523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.296 [2024-12-14 12:33:28.877294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.296 [2024-12-14 12:33:28.877372] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
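Both JSON negative tests hand bdevperf a deliberately malformed config and expect a clean non-zero exit through spdk_app_stop rather than a crash: nonenclosed.json is not wrapped in a top-level object (hence "not enclosed in {}") and nonarray.json makes 'subsystems' something other than an array. For contrast, a minimal well-formed config has this shape (an illustration, not a file from this run):

    printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }' > /tmp/minimal.json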
00:07:29.296 [2024-12-14 12:33:28.877389] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:29.296 [2024-12-14 12:33:28.877398] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.554 00:07:29.554 real 0m0.489s 00:07:29.554 user 0m0.299s 00:07:29.554 sys 0m0.085s 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.554 ************************************ 00:07:29.554 END TEST bdev_json_nonarray 00:07:29.554 ************************************ 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:29.554 12:33:29 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:29.554 12:33:29 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:29.554 12:33:29 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:29.554 12:33:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.554 12:33:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.554 12:33:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.554 ************************************ 00:07:29.554 START TEST bdev_gpt_uuid 00:07:29.554 ************************************ 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:29.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63886 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63886 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63886 ']' 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.554 12:33:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:29.554 [2024-12-14 12:33:29.177179] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
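bdev_gpt_uuid, starting here, brings up a bare spdk_tgt, loads bdev.json over RPC, and then looks each GPT partition up by its unique partition GUID, asserting with jq that the returned bdev's alias and driver_specific.gpt.unique_partition_guid round-trip to the same value. The assertion pattern, sketched from the jq filters traced below (GUID as it appears in the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    [[ $(echo "$bdev" | jq -r length) == 1 ]]
    [[ $(echo "$bdev" | jq -r '.[0].aliases[0]') == "$uuid" ]]
    [[ $(echo "$bdev" | jq -r '.[0].driver_specific.gpt.unique_partition_guid') == "$uuid" ]]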
00:07:29.554 [2024-12-14 12:33:29.177370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63886 ] 00:07:29.813 [2024-12-14 12:33:29.328898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.813 [2024-12-14 12:33:29.425694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.379 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.379 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:30.379 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:30.379 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.379 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.637 Some configs were skipped because the RPC state that can call them passed over. 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:30.637 { 00:07:30.637 "name": "Nvme1n1p1", 00:07:30.637 "aliases": [ 00:07:30.637 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:30.637 ], 00:07:30.637 "product_name": "GPT Disk", 00:07:30.637 "block_size": 4096, 00:07:30.637 "num_blocks": 655104, 00:07:30.637 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:30.637 "assigned_rate_limits": { 00:07:30.637 "rw_ios_per_sec": 0, 00:07:30.637 "rw_mbytes_per_sec": 0, 00:07:30.637 "r_mbytes_per_sec": 0, 00:07:30.637 "w_mbytes_per_sec": 0 00:07:30.637 }, 00:07:30.637 "claimed": false, 00:07:30.637 "zoned": false, 00:07:30.637 "supported_io_types": { 00:07:30.637 "read": true, 00:07:30.637 "write": true, 00:07:30.637 "unmap": true, 00:07:30.637 "flush": true, 00:07:30.637 "reset": true, 00:07:30.637 "nvme_admin": false, 00:07:30.637 "nvme_io": false, 00:07:30.637 "nvme_io_md": false, 00:07:30.637 "write_zeroes": true, 00:07:30.637 "zcopy": false, 00:07:30.637 "get_zone_info": false, 00:07:30.637 "zone_management": false, 00:07:30.637 "zone_append": false, 00:07:30.637 "compare": true, 00:07:30.637 "compare_and_write": false, 00:07:30.637 "abort": true, 00:07:30.637 "seek_hole": false, 00:07:30.637 "seek_data": false, 00:07:30.637 "copy": true, 00:07:30.637 "nvme_iov_md": false 00:07:30.637 }, 00:07:30.637 "driver_specific": { 
00:07:30.637 "gpt": { 00:07:30.637 "base_bdev": "Nvme1n1", 00:07:30.637 "offset_blocks": 256, 00:07:30.637 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:30.637 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:30.637 "partition_name": "SPDK_TEST_first" 00:07:30.637 } 00:07:30.637 } 00:07:30.637 } 00:07:30.637 ]' 00:07:30.637 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:30.896 { 00:07:30.896 "name": "Nvme1n1p2", 00:07:30.896 "aliases": [ 00:07:30.896 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:30.896 ], 00:07:30.896 "product_name": "GPT Disk", 00:07:30.896 "block_size": 4096, 00:07:30.896 "num_blocks": 655103, 00:07:30.896 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:30.896 "assigned_rate_limits": { 00:07:30.896 "rw_ios_per_sec": 0, 00:07:30.896 "rw_mbytes_per_sec": 0, 00:07:30.896 "r_mbytes_per_sec": 0, 00:07:30.896 "w_mbytes_per_sec": 0 00:07:30.896 }, 00:07:30.896 "claimed": false, 00:07:30.896 "zoned": false, 00:07:30.896 "supported_io_types": { 00:07:30.896 "read": true, 00:07:30.896 "write": true, 00:07:30.896 "unmap": true, 00:07:30.896 "flush": true, 00:07:30.896 "reset": true, 00:07:30.896 "nvme_admin": false, 00:07:30.896 "nvme_io": false, 00:07:30.896 "nvme_io_md": false, 00:07:30.896 "write_zeroes": true, 00:07:30.896 "zcopy": false, 00:07:30.896 "get_zone_info": false, 00:07:30.896 "zone_management": false, 00:07:30.896 "zone_append": false, 00:07:30.896 "compare": true, 00:07:30.896 "compare_and_write": false, 00:07:30.896 "abort": true, 00:07:30.896 "seek_hole": false, 00:07:30.896 "seek_data": false, 00:07:30.896 "copy": true, 00:07:30.896 "nvme_iov_md": false 00:07:30.896 }, 00:07:30.896 "driver_specific": { 00:07:30.896 "gpt": { 00:07:30.896 "base_bdev": "Nvme1n1", 00:07:30.896 "offset_blocks": 655360, 00:07:30.896 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:30.896 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:30.896 "partition_name": "SPDK_TEST_second" 00:07:30.896 } 00:07:30.896 } 00:07:30.896 } 00:07:30.896 ]' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63886 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63886 ']' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63886 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63886 00:07:30.896 killing process with pid 63886 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63886' 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63886 00:07:30.896 12:33:30 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63886 00:07:32.376 ************************************ 00:07:32.376 END TEST bdev_gpt_uuid 00:07:32.376 ************************************ 00:07:32.376 00:07:32.376 real 0m2.979s 00:07:32.376 user 0m3.078s 00:07:32.376 sys 0m0.369s 00:07:32.376 12:33:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.376 12:33:32 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:32.636 12:33:32 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:32.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:32.897 Waiting for block devices as requested 00:07:33.158 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.158 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:33.158 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:33.158 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.446 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:38.446 12:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:38.446 12:33:37 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:38.707 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:38.707 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:38.707 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:38.707 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:38.707 12:33:38 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:38.707 00:07:38.707 real 0m54.462s 00:07:38.707 user 1m9.887s 00:07:38.707 sys 0m7.621s 00:07:38.707 12:33:38 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.707 ************************************ 00:07:38.707 12:33:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.707 END TEST blockdev_nvme_gpt 00:07:38.707 ************************************ 00:07:38.707 12:33:38 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:38.707 12:33:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.707 12:33:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.707 12:33:38 -- common/autotest_common.sh@10 -- # set +x 00:07:38.707 ************************************ 00:07:38.707 START TEST nvme 00:07:38.707 ************************************ 00:07:38.707 12:33:38 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:38.707 * Looking for test storage... 00:07:38.707 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:38.707 12:33:38 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.707 12:33:38 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.707 12:33:38 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.969 12:33:38 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.969 12:33:38 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.969 12:33:38 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.969 12:33:38 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.969 12:33:38 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.969 12:33:38 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.969 12:33:38 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:38.969 12:33:38 nvme -- scripts/common.sh@345 -- # : 1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.969 12:33:38 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.969 12:33:38 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@353 -- # local d=1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.969 12:33:38 nvme -- scripts/common.sh@355 -- # echo 1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.969 12:33:38 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@353 -- # local d=2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.969 12:33:38 nvme -- scripts/common.sh@355 -- # echo 2 00:07:38.969 12:33:38 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.970 12:33:38 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.970 12:33:38 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.970 12:33:38 nvme -- scripts/common.sh@368 -- # return 0 00:07:38.970 12:33:38 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.970 12:33:38 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.970 --rc genhtml_branch_coverage=1 00:07:38.970 --rc genhtml_function_coverage=1 00:07:38.970 --rc genhtml_legend=1 00:07:38.970 --rc geninfo_all_blocks=1 00:07:38.970 --rc geninfo_unexecuted_blocks=1 00:07:38.970 00:07:38.970 ' 00:07:38.970 12:33:38 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.970 --rc genhtml_branch_coverage=1 00:07:38.970 --rc genhtml_function_coverage=1 00:07:38.970 --rc genhtml_legend=1 00:07:38.970 --rc geninfo_all_blocks=1 00:07:38.970 --rc geninfo_unexecuted_blocks=1 00:07:38.970 00:07:38.970 ' 00:07:38.970 12:33:38 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.970 --rc genhtml_branch_coverage=1 00:07:38.970 --rc genhtml_function_coverage=1 00:07:38.970 --rc genhtml_legend=1 00:07:38.970 --rc geninfo_all_blocks=1 00:07:38.970 --rc geninfo_unexecuted_blocks=1 00:07:38.970 00:07:38.970 ' 00:07:38.970 12:33:38 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.970 --rc genhtml_branch_coverage=1 00:07:38.970 --rc genhtml_function_coverage=1 00:07:38.970 --rc genhtml_legend=1 00:07:38.970 --rc geninfo_all_blocks=1 00:07:38.970 --rc geninfo_unexecuted_blocks=1 00:07:38.970 00:07:38.970 ' 00:07:38.970 12:33:38 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.541 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.803 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.064 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.064 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.064 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:40.064 12:33:39 nvme -- nvme/nvme.sh@79 -- # uname 00:07:40.064 12:33:39 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:40.064 12:33:39 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:40.064 12:33:39 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:40.064 12:33:39 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:40.064 Waiting for stub to ready for secondary processes... 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1075 -- # stubpid=64521 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64521 ]] 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:40.064 12:33:39 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:40.064 [2024-12-14 12:33:39.747143] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:40.064 [2024-12-14 12:33:39.747316] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:41.006 12:33:40 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:41.006 12:33:40 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64521 ]] 00:07:41.006 12:33:40 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:41.949 [2024-12-14 12:33:41.314277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.949 [2024-12-14 12:33:41.475791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.949 [2024-12-14 12:33:41.476164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.949 [2024-12-14 12:33:41.476345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.949 [2024-12-14 12:33:41.496289] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:41.949 [2024-12-14 12:33:41.496355] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.949 [2024-12-14 12:33:41.506572] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:41.949 [2024-12-14 12:33:41.506748] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:41.949 [2024-12-14 12:33:41.509535] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.949 [2024-12-14 12:33:41.509827] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:41.949 [2024-12-14 12:33:41.509906] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:41.949 [2024-12-14 12:33:41.512321] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.949 [2024-12-14 12:33:41.512550] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:41.949 [2024-12-14 12:33:41.512622] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:41.949 [2024-12-14 12:33:41.516395] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:41.949 [2024-12-14 12:33:41.516620] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:41.949 [2024-12-14 12:33:41.516681] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:41.949 [2024-12-14 12:33:41.516729] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:41.949 [2024-12-14 12:33:41.516766] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:42.209 done. 00:07:42.209 12:33:41 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:42.209 12:33:41 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:42.209 12:33:41 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:42.209 12:33:41 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:42.209 12:33:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.209 12:33:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.209 ************************************ 00:07:42.209 START TEST nvme_reset 00:07:42.209 ************************************ 00:07:42.209 12:33:41 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:42.469 Initializing NVMe Controllers 00:07:42.469 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:42.469 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:42.469 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:42.469 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:42.469 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:42.469 00:07:42.469 real 0m0.257s 00:07:42.469 user 0m0.082s 00:07:42.469 sys 0m0.118s 00:07:42.469 12:33:41 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.469 12:33:41 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:42.469 ************************************ 00:07:42.469 END TEST nvme_reset 00:07:42.469 ************************************ 00:07:42.469 12:33:42 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:42.469 12:33:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.469 12:33:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.469 12:33:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.469 ************************************ 00:07:42.469 START TEST nvme_identify 00:07:42.469 ************************************ 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:42.469 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:42.469 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:42.469 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:42.469 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:42.469 12:33:42 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:42.469 12:33:42 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:42.469 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:42.753 [2024-12-14 12:33:42.314297] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64554 terminated unexpected 00:07:42.753 ===================================================== 00:07:42.753 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:42.753 ===================================================== 00:07:42.753 Controller Capabilities/Features 00:07:42.753 ================================ 00:07:42.753 Vendor ID: 1b36 00:07:42.753 Subsystem Vendor ID: 1af4 00:07:42.753 Serial Number: 12343 00:07:42.753 Model Number: QEMU NVMe Ctrl 00:07:42.753 Firmware Version: 8.0.0 00:07:42.753 Recommended Arb Burst: 6 00:07:42.753 IEEE OUI Identifier: 00 54 52 00:07:42.753 Multi-path I/O 00:07:42.753 May have multiple subsystem ports: No 00:07:42.753 May have multiple controllers: Yes 00:07:42.753 Associated with SR-IOV VF: No 00:07:42.753 Max Data Transfer Size: 524288 00:07:42.753 Max Number of Namespaces: 256 00:07:42.753 Max Number of I/O Queues: 64 00:07:42.753 NVMe Specification Version (VS): 1.4 00:07:42.753 NVMe Specification Version (Identify): 1.4 00:07:42.753 Maximum Queue Entries: 2048 00:07:42.753 Contiguous Queues Required: Yes 00:07:42.753 Arbitration Mechanisms Supported 00:07:42.753 Weighted Round Robin: Not Supported 00:07:42.753 Vendor Specific: Not Supported 00:07:42.753 Reset Timeout: 7500 ms 00:07:42.753 Doorbell Stride: 4 bytes 00:07:42.753 NVM Subsystem Reset: Not Supported 00:07:42.753 Command Sets Supported 00:07:42.753 NVM Command Set: Supported 00:07:42.753 Boot Partition: Not Supported 00:07:42.753 Memory Page Size Minimum: 4096 bytes 00:07:42.753 Memory Page Size Maximum: 65536 bytes 00:07:42.753 Persistent Memory Region: Not Supported 00:07:42.753 Optional Asynchronous Events Supported 00:07:42.753 Namespace Attribute Notices: Supported 00:07:42.753 Firmware Activation Notices: Not Supported 00:07:42.753 ANA Change Notices: Not Supported 00:07:42.753 PLE Aggregate Log Change Notices: Not Supported 00:07:42.753 LBA Status Info Alert Notices: Not Supported 00:07:42.753 EGE Aggregate Log Change Notices: Not Supported 00:07:42.753 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.753 Zone Descriptor Change Notices: Not Supported 00:07:42.753 Discovery Log Change Notices: Not Supported 00:07:42.753 Controller Attributes 00:07:42.753 128-bit Host Identifier: Not Supported 00:07:42.753 Non-Operational Permissive Mode: Not Supported 00:07:42.753 NVM Sets: Not Supported 00:07:42.753 Read Recovery Levels: Not Supported 00:07:42.753 Endurance Groups: Supported 00:07:42.753 Predictable Latency Mode: Not Supported 00:07:42.753 Traffic Based Keep Alive: Not Supported 00:07:42.753 Namespace Granularity: Not Supported 00:07:42.753 SQ Associations: Not Supported 00:07:42.753 UUID List: Not Supported 00:07:42.753 Multi-Domain Subsystem: Not Supported 00:07:42.753 Fixed Capacity Management: Not Supported 00:07:42.753 Variable Capacity Management: Not Supported 00:07:42.753 Delete Endurance Group: Not Supported 00:07:42.753 Delete NVM Set: Not Supported 00:07:42.753 Extended LBA Formats Supported: Supported 00:07:42.753 Flexible Data Placement Supported: Supported 00:07:42.753 00:07:42.753 Controller Memory Buffer Support 00:07:42.753 ================================ 00:07:42.753 Supported: No 00:07:42.753
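Note: the listing that begins above is the spdk_nvme_identify report for the four attached QEMU controllers; the bracketed nvme_ctrlr.c *ERROR* lines about process 64554 "terminated unexpected" are stderr notices about an earlier test process that the log capture weaves into the report, not identify failures. A minimal sketch of narrowing the same report to a single controller, assuming the tool's standard -r transport-ID option (not used by this run):

  # Hypothetical single-controller identify; the traddr is copied from the log above.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:PCIe traddr:0000:00:13.0' -i 0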
00:07:42.753 Persistent Memory Region Support 00:07:42.753 ================================ 00:07:42.753 Supported: No 00:07:42.753 00:07:42.753 Admin Command Set Attributes 00:07:42.753 ============================ 00:07:42.753 Security Send/Receive: Not Supported 00:07:42.753 Format NVM: Supported 00:07:42.753 Firmware Activate/Download: Not Supported 00:07:42.753 Namespace Management: Supported 00:07:42.753 Device Self-Test: Not Supported 00:07:42.753 Directives: Supported 00:07:42.753 NVMe-MI: Not Supported 00:07:42.753 Virtualization Management: Not Supported 00:07:42.753 Doorbell Buffer Config: Supported 00:07:42.753 Get LBA Status Capability: Not Supported 00:07:42.753 Command & Feature Lockdown Capability: Not Supported 00:07:42.753 Abort Command Limit: 4 00:07:42.753 Async Event Request Limit: 4 00:07:42.753 Number of Firmware Slots: N/A 00:07:42.753 Firmware Slot 1 Read-Only: N/A 00:07:42.753 Firmware Activation Without Reset: N/A 00:07:42.753 Multiple Update Detection Support: N/A 00:07:42.753 Firmware Update Granularity: No Information Provided 00:07:42.753 Per-Namespace SMART Log: Yes 00:07:42.753 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.753 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:42.753 Command Effects Log Page: Supported 00:07:42.753 Get Log Page Extended Data: Supported 00:07:42.753 Telemetry Log Pages: Not Supported 00:07:42.753 Persistent Event Log Pages: Not Supported 00:07:42.753 Supported Log Pages Log Page: May Support 00:07:42.753 Commands Supported & Effects Log Page: Not Supported 00:07:42.753 Feature Identifiers & Effects Log Page:May Support 00:07:42.753 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.753 Data Area 4 for Telemetry Log: Not Supported 00:07:42.753 Error Log Page Entries Supported: 1 00:07:42.753 Keep Alive: Not Supported 00:07:42.753 00:07:42.753 NVM Command Set Attributes 00:07:42.753 ========================== 00:07:42.753 Submission Queue Entry Size 00:07:42.753 Max: 64 00:07:42.753 Min: 64 00:07:42.753 Completion Queue Entry Size 00:07:42.753 Max: 16 00:07:42.753 Min: 16 00:07:42.753 Number of Namespaces: 256 00:07:42.753 Compare Command: Supported 00:07:42.753 Write Uncorrectable Command: Not Supported 00:07:42.753 Dataset Management Command: Supported 00:07:42.753 Write Zeroes Command: Supported 00:07:42.753 Set Features Save Field: Supported 00:07:42.753 Reservations: Not Supported 00:07:42.753 Timestamp: Supported 00:07:42.753 Copy: Supported 00:07:42.754 Volatile Write Cache: Present 00:07:42.754 Atomic Write Unit (Normal): 1 00:07:42.754 Atomic Write Unit (PFail): 1 00:07:42.754 Atomic Compare & Write Unit: 1 00:07:42.754 Fused Compare & Write: Not Supported 00:07:42.754 Scatter-Gather List 00:07:42.754 SGL Command Set: Supported 00:07:42.754 SGL Keyed: Not Supported 00:07:42.754 SGL Bit Bucket Descriptor: Not Supported 00:07:42.754 SGL Metadata Pointer: Not Supported 00:07:42.754 Oversized SGL: Not Supported 00:07:42.754 SGL Metadata Address: Not Supported 00:07:42.754 SGL Offset: Not Supported 00:07:42.754 Transport SGL Data Block: Not Supported 00:07:42.754 Replay Protected Memory Block: Not Supported 00:07:42.754 00:07:42.754 Firmware Slot Information 00:07:42.754 ========================= 00:07:42.754 Active slot: 1 00:07:42.754 Slot 1 Firmware Revision: 1.0 00:07:42.754 00:07:42.754 00:07:42.754 Commands Supported and Effects 00:07:42.754 ============================== 00:07:42.754 Admin Commands 00:07:42.754 -------------- 00:07:42.754 Delete I/O Submission Queue (00h): Supported 
00:07:42.754 Create I/O Submission Queue (01h): Supported 00:07:42.754 Get Log Page (02h): Supported 00:07:42.754 Delete I/O Completion Queue (04h): Supported 00:07:42.754 Create I/O Completion Queue (05h): Supported 00:07:42.754 Identify (06h): Supported 00:07:42.754 Abort (08h): Supported 00:07:42.754 Set Features (09h): Supported 00:07:42.754 Get Features (0Ah): Supported 00:07:42.754 Asynchronous Event Request (0Ch): Supported 00:07:42.754 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.754 Directive Send (19h): Supported 00:07:42.754 Directive Receive (1Ah): Supported 00:07:42.754 Virtualization Management (1Ch): Supported 00:07:42.754 Doorbell Buffer Config (7Ch): Supported 00:07:42.754 Format NVM (80h): Supported LBA-Change 00:07:42.754 I/O Commands 00:07:42.754 ------------ 00:07:42.754 Flush (00h): Supported LBA-Change 00:07:42.754 Write (01h): Supported LBA-Change 00:07:42.754 Read (02h): Supported 00:07:42.754 Compare (05h): Supported 00:07:42.754 Write Zeroes (08h): Supported LBA-Change 00:07:42.754 Dataset Management (09h): Supported LBA-Change 00:07:42.754 Unknown (0Ch): Supported 00:07:42.754 Unknown (12h): Supported 00:07:42.754 Copy (19h): Supported LBA-Change 00:07:42.754 Unknown (1Dh): Supported LBA-Change 00:07:42.754 00:07:42.754 Error Log 00:07:42.754 ========= 00:07:42.754 00:07:42.754 Arbitration 00:07:42.754 =========== 00:07:42.754 Arbitration Burst: no limit 00:07:42.754 00:07:42.754 Power Management 00:07:42.754 ================ 00:07:42.754 Number of Power States: 1 00:07:42.754 Current Power State: Power State #0 00:07:42.754 Power State #0: 00:07:42.754 Max Power: 25.00 W 00:07:42.754 Non-Operational State: Operational 00:07:42.754 Entry Latency: 16 microseconds 00:07:42.754 Exit Latency: 4 microseconds 00:07:42.754 Relative Read Throughput: 0 00:07:42.754 Relative Read Latency: 0 00:07:42.754 Relative Write Throughput: 0 00:07:42.754 Relative Write Latency: 0 00:07:42.754 Idle Power: Not Reported 00:07:42.754 Active Power: Not Reported 00:07:42.754 Non-Operational Permissive Mode: Not Supported 00:07:42.754 00:07:42.754 Health Information 00:07:42.754 ================== 00:07:42.754 Critical Warnings: 00:07:42.754 Available Spare Space: OK 00:07:42.754 Temperature: OK 00:07:42.754 Device Reliability: OK 00:07:42.754 Read Only: No 00:07:42.754 Volatile Memory Backup: OK 00:07:42.754 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.754 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.754 Available Spare: 0% 00:07:42.754 Available Spare Threshold: 0% 00:07:42.754 Life Percentage Used: 0% 00:07:42.754 Data Units Read: 950 00:07:42.754 Data Units Written: 879 00:07:42.754 Host Read Commands: 41036 00:07:42.754 Host Write Commands: 40459 00:07:42.754 Controller Busy Time: 0 minutes 00:07:42.754 Power Cycles: 0 00:07:42.754 Power On Hours: 0 hours 00:07:42.754 Unsafe Shutdowns: 0 00:07:42.754 Unrecoverable Media Errors: 0 00:07:42.754 Lifetime Error Log Entries: 0 00:07:42.754 Warning Temperature Time: 0 minutes 00:07:42.754 Critical Temperature Time: 0 minutes 00:07:42.754 00:07:42.754 Number of Queues 00:07:42.754 ================ 00:07:42.754 Number of I/O Submission Queues: 64 00:07:42.754 Number of I/O Completion Queues: 64 00:07:42.754 00:07:42.754 ZNS Specific Controller Data 00:07:42.754 ============================ 00:07:42.754 Zone Append Size Limit: 0 00:07:42.754 00:07:42.754 00:07:42.754 Active Namespaces 00:07:42.754 ================= 00:07:42.754 Namespace ID:1 00:07:42.754 Error Recovery Timeout: Unlimited 00:07:42.754 
Command Set Identifier: NVM (00h) 00:07:42.754 Deallocate: Supported 00:07:42.754 Deallocated/Unwritten Error: Supported 00:07:42.754 Deallocated Read Value: All 0x00 00:07:42.754 Deallocate in Write Zeroes: Not Supported 00:07:42.754 Deallocated Guard Field: 0xFFFF 00:07:42.754 Flush: Supported 00:07:42.754 Reservation: Not Supported 00:07:42.754 Namespace Sharing Capabilities: Multiple Controllers 00:07:42.754 Size (in LBAs): 262144 (1GiB) 00:07:42.754 Capacity (in LBAs): 262144 (1GiB) 00:07:42.754 Utilization (in LBAs): 262144 (1GiB) 00:07:42.754 Thin Provisioning: Not Supported 00:07:42.754 Per-NS Atomic Units: No 00:07:42.754 Maximum Single Source Range Length: 128 00:07:42.754 Maximum Copy Length: 128 00:07:42.754 Maximum Source Range Count: 128 00:07:42.754 NGUID/EUI64 Never Reused: No 00:07:42.754 Namespace Write Protected: No 00:07:42.754 Endurance group ID: 1 00:07:42.754 Number of LBA Formats: 8 00:07:42.754 Current LBA Format: LBA Format #04 00:07:42.754 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.754 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.754 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.754 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.754 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.754 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.754 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.754 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.754 00:07:42.754 Get Feature FDP: 00:07:42.754 ================ 00:07:42.754 Enabled: Yes 00:07:42.754 FDP configuration index: 0 00:07:42.754 00:07:42.754 FDP configurations log page 00:07:42.754 =========================== 00:07:42.754 Number of FDP configurations: 1 00:07:42.754 Version: 0 00:07:42.754 Size: 112 00:07:42.754 FDP Configuration Descriptor: 0 00:07:42.754 Descriptor Size: 96 00:07:42.754 Reclaim Group Identifier format: 2 00:07:42.754 FDP Volatile Write Cache: Not Present 00:07:42.754 FDP Configuration: Valid 00:07:42.754 Vendor Specific Size: 0 00:07:42.754 Number of Reclaim Groups: 2 00:07:42.754 Number of Reclaim Unit Handles: 8 00:07:42.754 Max Placement Identifiers: 128 00:07:42.754 Number of Namespaces Supported: 256 00:07:42.754 Reclaim Unit Nominal Size: 6000000 bytes 00:07:42.754 Estimated Reclaim Unit Time Limit: Not Reported 00:07:42.754 RUH Desc #000: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #001: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #002: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #003: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #004: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #005: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #006: RUH Type: Initially Isolated 00:07:42.754 RUH Desc #007: RUH Type: Initially Isolated 00:07:42.754 00:07:42.754 FDP reclaim unit handle usage log page 00:07:42.754 ====================================== 00:07:42.753 [2024-12-14 12:33:42.318591] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64554 terminated unexpected 00:07:42.753 Number of Reclaim Unit Handles: 8 00:07:42.753 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:42.753 RUH Usage Desc #001: RUH Attributes: Unused 00:07:42.753 RUH Usage Desc #002: RUH Attributes: Unused 00:07:42.753 RUH Usage Desc #003: RUH Attributes: Unused 00:07:42.753 RUH Usage Desc #004: RUH Attributes: Unused 00:07:42.753 RUH Usage Desc #005: RUH Attributes: Unused 00:07:42.753 RUH Usage Desc #006: RUH Attributes: Unused 00:07:42.753 RUH Usage Desc
#007: RUH Attributes: Unused 00:07:42.754 00:07:42.754 FDP statistics log page 00:07:42.754 ======================= 00:07:42.754 Host bytes with metadata written: 527212544 00:07:42.754 Media bytes with metadata written: 527269888 00:07:42.754 Media bytes erased: 0 00:07:42.754 00:07:42.754 FDP events log page 00:07:42.754 =================== 00:07:42.754 Number of FDP events: 0 00:07:42.754 00:07:42.754 NVM Specific Namespace Data 00:07:42.754 =========================== 00:07:42.754 Logical Block Storage Tag Mask: 0 00:07:42.755 Protection Information Capabilities: 00:07:42.755 16b Guard Protection Information Storage Tag Support: No 00:07:42.755 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.755 Storage Tag Check Read Support: No 00:07:42.755 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.755 ===================================================== 00:07:42.755 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:42.755 ===================================================== 00:07:42.755 Controller Capabilities/Features 00:07:42.755 ================================ 00:07:42.755 Vendor ID: 1b36 00:07:42.755 Subsystem Vendor ID: 1af4 00:07:42.755 Serial Number: 12340 00:07:42.755 Model Number: QEMU NVMe Ctrl 00:07:42.755 Firmware Version: 8.0.0 00:07:42.755 Recommended Arb Burst: 6 00:07:42.755 IEEE OUI Identifier: 00 54 52 00:07:42.755 Multi-path I/O 00:07:42.755 May have multiple subsystem ports: No 00:07:42.755 May have multiple controllers: No 00:07:42.755 Associated with SR-IOV VF: No 00:07:42.755 Max Data Transfer Size: 524288 00:07:42.755 Max Number of Namespaces: 256 00:07:42.755 Max Number of I/O Queues: 64 00:07:42.755 NVMe Specification Version (VS): 1.4 00:07:42.755 NVMe Specification Version (Identify): 1.4 00:07:42.755 Maximum Queue Entries: 2048 00:07:42.755 Contiguous Queues Required: Yes 00:07:42.755 Arbitration Mechanisms Supported 00:07:42.755 Weighted Round Robin: Not Supported 00:07:42.755 Vendor Specific: Not Supported 00:07:42.755 Reset Timeout: 7500 ms 00:07:42.755 Doorbell Stride: 4 bytes 00:07:42.755 NVM Subsystem Reset: Not Supported 00:07:42.755 Command Sets Supported 00:07:42.755 NVM Command Set: Supported 00:07:42.755 Boot Partition: Not Supported 00:07:42.755 Memory Page Size Minimum: 4096 bytes 00:07:42.755 Memory Page Size Maximum: 65536 bytes 00:07:42.755 Persistent Memory Region: Not Supported 00:07:42.755 Optional Asynchronous Events Supported 00:07:42.755 Namespace Attribute Notices: Supported 00:07:42.755 Firmware Activation Notices: Not Supported 00:07:42.755 ANA Change Notices: Not Supported 00:07:42.755 PLE Aggregate Log Change Notices: Not Supported 00:07:42.755 LBA Status Info Alert Notices: Not Supported 00:07:42.755 EGE Aggregate Log Change 
Notices: Not Supported 00:07:42.755 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.755 Zone Descriptor Change Notices: Not Supported 00:07:42.755 Discovery Log Change Notices: Not Supported 00:07:42.755 Controller Attributes 00:07:42.755 128-bit Host Identifier: Not Supported 00:07:42.755 Non-Operational Permissive Mode: Not Supported 00:07:42.755 NVM Sets: Not Supported 00:07:42.755 Read Recovery Levels: Not Supported 00:07:42.755 Endurance Groups: Not Supported 00:07:42.755 Predictable Latency Mode: Not Supported 00:07:42.755 Traffic Based Keep Alive: Not Supported 00:07:42.755 Namespace Granularity: Not Supported 00:07:42.755 SQ Associations: Not Supported 00:07:42.755 UUID List: Not Supported 00:07:42.755 Multi-Domain Subsystem: Not Supported 00:07:42.755 Fixed Capacity Management: Not Supported 00:07:42.755 Variable Capacity Management: Not Supported 00:07:42.755 Delete Endurance Group: Not Supported 00:07:42.755 Delete NVM Set: Not Supported 00:07:42.755 Extended LBA Formats Supported: Supported 00:07:42.755 Flexible Data Placement Supported: Not Supported 00:07:42.755 00:07:42.755 Controller Memory Buffer Support 00:07:42.755 ================================ 00:07:42.755 Supported: No 00:07:42.755 00:07:42.755 Persistent Memory Region Support 00:07:42.755 ================================ 00:07:42.755 Supported: No 00:07:42.755 00:07:42.755 Admin Command Set Attributes 00:07:42.755 ============================ 00:07:42.755 Security Send/Receive: Not Supported 00:07:42.755 Format NVM: Supported 00:07:42.755 Firmware Activate/Download: Not Supported 00:07:42.755 Namespace Management: Supported 00:07:42.755 Device Self-Test: Not Supported 00:07:42.755 Directives: Supported 00:07:42.755 NVMe-MI: Not Supported 00:07:42.755 Virtualization Management: Not Supported 00:07:42.755 Doorbell Buffer Config: Supported 00:07:42.755 Get LBA Status Capability: Not Supported 00:07:42.755 Command & Feature Lockdown Capability: Not Supported 00:07:42.755 Abort Command Limit: 4 00:07:42.755 Async Event Request Limit: 4 00:07:42.755 Number of Firmware Slots: N/A 00:07:42.755 Firmware Slot 1 Read-Only: N/A 00:07:42.755 Firmware Activation Without Reset: N/A 00:07:42.755 Multiple Update Detection Support: N/A 00:07:42.755 Firmware Update Granularity: No Information Provided 00:07:42.755 Per-Namespace SMART Log: Yes 00:07:42.755 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.755 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:42.755 Command Effects Log Page: Supported 00:07:42.755 Get Log Page Extended Data: Supported 00:07:42.755 Telemetry Log Pages: Not Supported 00:07:42.755 Persistent Event Log Pages: Not Supported 00:07:42.755 Supported Log Pages Log Page: May Support 00:07:42.755 Commands Supported & Effects Log Page: Not Supported 00:07:42.755 Feature Identifiers & Effects Log Page:May Support 00:07:42.755 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.755 Data Area 4 for Telemetry Log: Not Supported 00:07:42.755 Error Log Page Entries Supported: 1 00:07:42.755 Keep Alive: Not Supported 00:07:42.755 00:07:42.755 NVM Command Set Attributes 00:07:42.755 ========================== 00:07:42.755 Submission Queue Entry Size 00:07:42.755 Max: 64 00:07:42.755 Min: 64 00:07:42.755 Completion Queue Entry Size 00:07:42.755 Max: 16 00:07:42.755 Min: 16 00:07:42.755 Number of Namespaces: 256 00:07:42.755 Compare Command: Supported 00:07:42.755 Write Uncorrectable Command: Not Supported 00:07:42.755 Dataset Management Command: Supported 00:07:42.755 Write Zeroes Command:
Supported 00:07:42.755 Set Features Save Field: Supported 00:07:42.755 Reservations: Not Supported 00:07:42.755 Timestamp: Supported 00:07:42.755 Copy: Supported 00:07:42.755 Volatile Write Cache: Present 00:07:42.755 Atomic Write Unit (Normal): 1 00:07:42.755 Atomic Write Unit (PFail): 1 00:07:42.755 Atomic Compare & Write Unit: 1 00:07:42.755 Fused Compare & Write: Not Supported 00:07:42.755 Scatter-Gather List 00:07:42.755 SGL Command Set: Supported 00:07:42.755 SGL Keyed: Not Supported 00:07:42.755 SGL Bit Bucket Descriptor: Not Supported 00:07:42.755 SGL Metadata Pointer: Not Supported 00:07:42.755 Oversized SGL: Not Supported 00:07:42.755 SGL Metadata Address: Not Supported 00:07:42.755 SGL Offset: Not Supported 00:07:42.755 Transport SGL Data Block: Not Supported 00:07:42.755 Replay Protected Memory Block: Not Supported 00:07:42.755 00:07:42.755 Firmware Slot Information 00:07:42.755 ========================= 00:07:42.755 Active slot: 1 00:07:42.755 Slot 1 Firmware Revision: 1.0 00:07:42.755 00:07:42.755 00:07:42.755 Commands Supported and Effects 00:07:42.755 ============================== 00:07:42.755 Admin Commands 00:07:42.755 -------------- 00:07:42.755 Delete I/O Submission Queue (00h): Supported 00:07:42.755 Create I/O Submission Queue (01h): Supported 00:07:42.755 Get Log Page (02h): Supported 00:07:42.755 Delete I/O Completion Queue (04h): Supported 00:07:42.755 Create I/O Completion Queue (05h): Supported 00:07:42.755 Identify (06h): Supported 00:07:42.755 Abort (08h): Supported 00:07:42.755 Set Features (09h): Supported 00:07:42.755 Get Features (0Ah): Supported 00:07:42.755 Asynchronous Event Request (0Ch): Supported 00:07:42.755 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.755 Directive Send (19h): Supported 00:07:42.755 Directive Receive (1Ah): Supported 00:07:42.755 Virtualization Management (1Ch): Supported 00:07:42.755 Doorbell Buffer Config (7Ch): Supported 00:07:42.755 Format NVM (80h): Supported LBA-Change 00:07:42.755 I/O Commands 00:07:42.755 ------------ 00:07:42.755 Flush (00h): Supported LBA-Change 00:07:42.755 Write (01h): Supported LBA-Change 00:07:42.755 Read (02h): Supported 00:07:42.755 Compare (05h): Supported 00:07:42.755 Write Zeroes (08h): Supported LBA-Change 00:07:42.755 Dataset Management (09h): Supported LBA-Change 00:07:42.755 Unknown (0Ch): Supported 00:07:42.755 Unknown (12h): Supported 00:07:42.755 Copy (19h): Supported LBA-Change 00:07:42.755 Unknown (1Dh): Supported LBA-Change 00:07:42.755 00:07:42.755 Error Log 00:07:42.755 ========= 00:07:42.755 00:07:42.755 Arbitration 00:07:42.756 =========== 00:07:42.756 Arbitration Burst: no limit 00:07:42.756 00:07:42.756 Power Management 00:07:42.756 ================ 00:07:42.756 Number of Power States: 1 00:07:42.756 Current Power State: Power State #0 00:07:42.756 Power State #0: 00:07:42.756 Max Power: 25.00 W 00:07:42.756 Non-Operational State: Operational 00:07:42.756 Entry Latency: 16 microseconds 00:07:42.756 Exit Latency: 4 microseconds 00:07:42.756 Relative Read Throughput: 0 00:07:42.756 Relative Read Latency: 0 00:07:42.756 Relative Write Throughput: 0 00:07:42.756 Relative Write Latency: 0 00:07:42.756 Idle Power: Not Reported 00:07:42.756 Active Power: Not Reported 00:07:42.756 Non-Operational Permissive Mode: Not Supported 00:07:42.756 00:07:42.756 Health Information 00:07:42.756 ================== 00:07:42.756 Critical Warnings: 00:07:42.756 Available Spare Space: OK 00:07:42.756 Temperature: OK 00:07:42.756 Device Reliability: OK 00:07:42.756 Read Only: No 
00:07:42.756 Volatile Memory Backup: OK 00:07:42.756 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.756 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.756 Available Spare: 0% 00:07:42.756 Available Spare Threshold: 0% 00:07:42.756 Life Percentage Used: 0% 00:07:42.756 Data Units Read: 705 00:07:42.756 Data Units Written: 633 00:07:42.756 Host Read Commands: 38707 00:07:42.756 Host Write Commands: 38493 00:07:42.756 Controller Busy Time: 0 minutes 00:07:42.756 Power Cycles: 0 00:07:42.756 Power On Hours: 0 hours 00:07:42.756 Unsafe Shutdowns: 0 00:07:42.756 Unrecoverable Media Errors: 0 00:07:42.756 Lifetime Error Log Entries: 0 00:07:42.756 Warning Temperature Time: 0 minutes 00:07:42.756 Critical Temperature Time: 0 minutes 00:07:42.756 00:07:42.756 Number of Queues 00:07:42.756 ================ 00:07:42.756 Number of I/O Submission Queues: 64 00:07:42.756 Number of I/O Completion Queues: 64 00:07:42.756 00:07:42.756 ZNS Specific Controller Data 00:07:42.756 ============================ 00:07:42.756 Zone Append Size Limit: 0 00:07:42.756 00:07:42.756 00:07:42.756 Active Namespaces 00:07:42.756 ================= 00:07:42.756 Namespace ID:1 00:07:42.756 Error Recovery Timeout: Unlimited 00:07:42.756 Command Set Identifier: NVM (00h) 00:07:42.756 Deallocate: Supported 00:07:42.756 Deallocated/Unwritten Error: Supported 00:07:42.756 Deallocated Read Value: All 0x00 00:07:42.756 Deallocate in Write Zeroes: Not Supported 00:07:42.756 Deallocated Guard Field: 0xFFFF 00:07:42.756 Flush: Supported 00:07:42.756 Reservation: Not Supported 00:07:42.756 Metadata Transferred as: Separate Metadata Buffer 00:07:42.756 Namespace Sharing Capabilities: Private 00:07:42.756 Size (in LBAs): 1548666 (5GiB) 00:07:42.756 Capacity (in LBAs): 1548666 (5GiB) 00:07:42.756 Utilization (in LBAs): 1548666 (5GiB) 00:07:42.756 Thin Provisioning: Not Supported 00:07:42.756 Per-NS Atomic Units: No 00:07:42.756 Maximum Single Source Range Length: 128 00:07:42.756 Maximum Copy Length: 128 00:07:42.756 Maximum Source Range Count: 128 00:07:42.756 NGUID/EUI64 Never Reused: No 00:07:42.756 Namespace Write Protected: No 00:07:42.756 Number of LBA Formats: 8 00:07:42.756 [2024-12-14 12:33:42.320470] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64554 terminated unexpected 00:07:42.756 Current LBA Format: LBA Format #07 00:07:42.756 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.756 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.756 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.756 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.756 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.756 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.756 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.756 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.756 00:07:42.756 NVM Specific Namespace Data 00:07:42.756 =========================== 00:07:42.756 Logical Block Storage Tag Mask: 0 00:07:42.756 Protection Information Capabilities: 00:07:42.756 16b Guard Protection Information Storage Tag Support: No 00:07:42.756 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.756 Storage Tag Check Read Support: No 00:07:42.756 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #02: Storage Tag Size: 0 , Protection
Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.756 ===================================================== 00:07:42.756 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:42.756 ===================================================== 00:07:42.756 Controller Capabilities/Features 00:07:42.756 ================================ 00:07:42.756 Vendor ID: 1b36 00:07:42.756 Subsystem Vendor ID: 1af4 00:07:42.756 Serial Number: 12341 00:07:42.756 Model Number: QEMU NVMe Ctrl 00:07:42.756 Firmware Version: 8.0.0 00:07:42.756 Recommended Arb Burst: 6 00:07:42.756 IEEE OUI Identifier: 00 54 52 00:07:42.756 Multi-path I/O 00:07:42.756 May have multiple subsystem ports: No 00:07:42.756 May have multiple controllers: No 00:07:42.756 Associated with SR-IOV VF: No 00:07:42.756 Max Data Transfer Size: 524288 00:07:42.756 Max Number of Namespaces: 256 00:07:42.756 Max Number of I/O Queues: 64 00:07:42.756 NVMe Specification Version (VS): 1.4 00:07:42.756 NVMe Specification Version (Identify): 1.4 00:07:42.756 Maximum Queue Entries: 2048 00:07:42.756 Contiguous Queues Required: Yes 00:07:42.756 Arbitration Mechanisms Supported 00:07:42.756 Weighted Round Robin: Not Supported 00:07:42.756 Vendor Specific: Not Supported 00:07:42.756 Reset Timeout: 7500 ms 00:07:42.756 Doorbell Stride: 4 bytes 00:07:42.756 NVM Subsystem Reset: Not Supported 00:07:42.756 Command Sets Supported 00:07:42.756 NVM Command Set: Supported 00:07:42.756 Boot Partition: Not Supported 00:07:42.756 Memory Page Size Minimum: 4096 bytes 00:07:42.756 Memory Page Size Maximum: 65536 bytes 00:07:42.756 Persistent Memory Region: Not Supported 00:07:42.756 Optional Asynchronous Events Supported 00:07:42.756 Namespace Attribute Notices: Supported 00:07:42.756 Firmware Activation Notices: Not Supported 00:07:42.756 ANA Change Notices: Not Supported 00:07:42.756 PLE Aggregate Log Change Notices: Not Supported 00:07:42.756 LBA Status Info Alert Notices: Not Supported 00:07:42.756 EGE Aggregate Log Change Notices: Not Supported 00:07:42.756 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.756 Zone Descriptor Change Notices: Not Supported 00:07:42.756 Discovery Log Change Notices: Not Supported 00:07:42.756 Controller Attributes 00:07:42.756 128-bit Host Identifier: Not Supported 00:07:42.756 Non-Operational Permissive Mode: Not Supported 00:07:42.756 NVM Sets: Not Supported 00:07:42.756 Read Recovery Levels: Not Supported 00:07:42.756 Endurance Groups: Not Supported 00:07:42.756 Predictable Latency Mode: Not Supported 00:07:42.756 Traffic Based Keep Alive: Not Supported 00:07:42.756 Namespace Granularity: Not Supported 00:07:42.756 SQ Associations: Not Supported 00:07:42.756 UUID List: Not Supported 00:07:42.756 Multi-Domain Subsystem: Not Supported 00:07:42.756 Fixed Capacity Management: Not Supported 00:07:42.756 Variable Capacity Management: Not Supported 00:07:42.756 Delete Endurance Group: Not Supported 00:07:42.756 Delete NVM Set: Not Supported 00:07:42.756 Extended LBA Formats Supported: Supported 00:07:42.756 Flexible Data Placement
Supported: Not Supported 00:07:42.756 00:07:42.756 Controller Memory Buffer Support 00:07:42.756 ================================ 00:07:42.756 Supported: No 00:07:42.756 00:07:42.756 Persistent Memory Region Support 00:07:42.756 ================================ 00:07:42.756 Supported: No 00:07:42.756 00:07:42.756 Admin Command Set Attributes 00:07:42.756 ============================ 00:07:42.756 Security Send/Receive: Not Supported 00:07:42.756 Format NVM: Supported 00:07:42.756 Firmware Activate/Download: Not Supported 00:07:42.756 Namespace Management: Supported 00:07:42.756 Device Self-Test: Not Supported 00:07:42.756 Directives: Supported 00:07:42.756 NVMe-MI: Not Supported 00:07:42.756 Virtualization Management: Not Supported 00:07:42.756 Doorbell Buffer Config: Supported 00:07:42.756 Get LBA Status Capability: Not Supported 00:07:42.756 Command & Feature Lockdown Capability: Not Supported 00:07:42.756 Abort Command Limit: 4 00:07:42.756 Async Event Request Limit: 4 00:07:42.756 Number of Firmware Slots: N/A 00:07:42.756 Firmware Slot 1 Read-Only: N/A 00:07:42.756 Firmware Activation Without Reset: N/A 00:07:42.756 Multiple Update Detection Support: N/A 00:07:42.756 Firmware Update Granularity: No Information Provided 00:07:42.756 Per-Namespace SMART Log: Yes 00:07:42.756 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.756 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:42.756 Command Effects Log Page: Supported 00:07:42.757 Get Log Page Extended Data: Supported 00:07:42.757 Telemetry Log Pages: Not Supported 00:07:42.757 Persistent Event Log Pages: Not Supported 00:07:42.757 Supported Log Pages Log Page: May Support 00:07:42.757 Commands Supported & Effects Log Page: Not Supported 00:07:42.757 Feature Identifiers & Effects Log Page:May Support 00:07:42.757 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.757 Data Area 4 for Telemetry Log: Not Supported 00:07:42.757 Error Log Page Entries Supported: 1 00:07:42.757 Keep Alive: Not Supported 00:07:42.757 00:07:42.757 NVM Command Set Attributes 00:07:42.757 ========================== 00:07:42.757 Submission Queue Entry Size 00:07:42.757 Max: 64 00:07:42.757 Min: 64 00:07:42.757 Completion Queue Entry Size 00:07:42.757 Max: 16 00:07:42.757 Min: 16 00:07:42.757 Number of Namespaces: 256 00:07:42.757 Compare Command: Supported 00:07:42.757 Write Uncorrectable Command: Not Supported 00:07:42.757 Dataset Management Command: Supported 00:07:42.757 Write Zeroes Command: Supported 00:07:42.757 Set Features Save Field: Supported 00:07:42.757 Reservations: Not Supported 00:07:42.757 Timestamp: Supported 00:07:42.757 Copy: Supported 00:07:42.757 Volatile Write Cache: Present 00:07:42.757 Atomic Write Unit (Normal): 1 00:07:42.757 Atomic Write Unit (PFail): 1 00:07:42.757 Atomic Compare & Write Unit: 1 00:07:42.757 Fused Compare & Write: Not Supported 00:07:42.757 Scatter-Gather List 00:07:42.757 SGL Command Set: Supported 00:07:42.757 SGL Keyed: Not Supported 00:07:42.757 SGL Bit Bucket Descriptor: Not Supported 00:07:42.757 SGL Metadata Pointer: Not Supported 00:07:42.757 Oversized SGL: Not Supported 00:07:42.757 SGL Metadata Address: Not Supported 00:07:42.757 SGL Offset: Not Supported 00:07:42.757 Transport SGL Data Block: Not Supported 00:07:42.757 Replay Protected Memory Block: Not Supported 00:07:42.757 00:07:42.757 Firmware Slot Information 00:07:42.757 ========================= 00:07:42.757 Active slot: 1 00:07:42.757 Slot 1 Firmware Revision: 1.0 00:07:42.757 00:07:42.757 00:07:42.757 Commands Supported and Effects 
00:07:42.757 ============================== 00:07:42.757 Admin Commands 00:07:42.757 -------------- 00:07:42.757 Delete I/O Submission Queue (00h): Supported 00:07:42.757 Create I/O Submission Queue (01h): Supported 00:07:42.757 Get Log Page (02h): Supported 00:07:42.757 Delete I/O Completion Queue (04h): Supported 00:07:42.757 Create I/O Completion Queue (05h): Supported 00:07:42.757 Identify (06h): Supported 00:07:42.757 Abort (08h): Supported 00:07:42.757 Set Features (09h): Supported 00:07:42.757 Get Features (0Ah): Supported 00:07:42.757 Asynchronous Event Request (0Ch): Supported 00:07:42.757 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.757 Directive Send (19h): Supported 00:07:42.757 Directive Receive (1Ah): Supported 00:07:42.757 Virtualization Management (1Ch): Supported 00:07:42.757 Doorbell Buffer Config (7Ch): Supported 00:07:42.757 Format NVM (80h): Supported LBA-Change 00:07:42.757 I/O Commands 00:07:42.757 ------------ 00:07:42.757 Flush (00h): Supported LBA-Change 00:07:42.757 Write (01h): Supported LBA-Change 00:07:42.757 Read (02h): Supported 00:07:42.757 Compare (05h): Supported 00:07:42.757 Write Zeroes (08h): Supported LBA-Change 00:07:42.757 Dataset Management (09h): Supported LBA-Change 00:07:42.757 Unknown (0Ch): Supported 00:07:42.757 Unknown (12h): Supported 00:07:42.757 Copy (19h): Supported LBA-Change 00:07:42.757 Unknown (1Dh): Supported LBA-Change 00:07:42.757 00:07:42.757 Error Log 00:07:42.757 ========= 00:07:42.757 00:07:42.757 Arbitration 00:07:42.757 =========== 00:07:42.757 Arbitration Burst: no limit 00:07:42.757 00:07:42.757 Power Management 00:07:42.757 ================ 00:07:42.757 Number of Power States: 1 00:07:42.757 Current Power State: Power State #0 00:07:42.757 Power State #0: 00:07:42.757 Max Power: 25.00 W 00:07:42.757 Non-Operational State: Operational 00:07:42.757 Entry Latency: 16 microseconds 00:07:42.757 Exit Latency: 4 microseconds 00:07:42.757 Relative Read Throughput: 0 00:07:42.757 Relative Read Latency: 0 00:07:42.757 Relative Write Throughput: 0 00:07:42.757 Relative Write Latency: 0 00:07:42.757 Idle Power: Not Reported 00:07:42.757 Active Power: Not Reported 00:07:42.757 Non-Operational Permissive Mode: Not Supported 00:07:42.757 00:07:42.757 Health Information 00:07:42.757 ================== 00:07:42.757 Critical Warnings: 00:07:42.757 Available Spare Space: OK 00:07:42.757 Temperature: OK 00:07:42.757 Device Reliability: OK 00:07:42.757 Read Only: No 00:07:42.757 Volatile Memory Backup: OK 00:07:42.757 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.757 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.757 Available Spare: 0% 00:07:42.757 Available Spare Threshold: 0% 00:07:42.757 Life Percentage Used: 0% 00:07:42.757 Data Units Read: 1078 00:07:42.757 Data Units Written: 948 00:07:42.757 Host Read Commands: 56884 00:07:42.757 Host Write Commands: 55719 00:07:42.757 Controller Busy Time: 0 minutes 00:07:42.757 Power Cycles: 0 00:07:42.757 Power On Hours: 0 hours 00:07:42.757 Unsafe Shutdowns: 0 00:07:42.757 Unrecoverable Media Errors: 0 00:07:42.757 Lifetime Error Log Entries: 0 00:07:42.757 Warning Temperature Time: 0 minutes 00:07:42.757 Critical Temperature Time: 0 minutes 00:07:42.757 00:07:42.757 Number of Queues 00:07:42.757 ================ 00:07:42.757 Number of I/O Submission Queues: 64 00:07:42.757 Number of I/O Completion Queues: 64 00:07:42.757 00:07:42.757 ZNS Specific Controller Data 00:07:42.757 ============================ 00:07:42.757 Zone Append Size Limit: 0 00:07:42.757 
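Note: every controller section above repeats the same identify layout, so the suite's scripts rarely parse this dump directly; when a single field is needed they use the rpc_cmd + jq pattern shown in the bdev_gpt_uuid test earlier. A minimal sketch of that pattern against a running SPDK target, assuming an attached bdev named Nvme0n1 (the name is illustrative, not taken from this log):

  # Hypothetical one-field query instead of a full identify dump.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq -r '.[0].num_blocks'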
00:07:42.757 00:07:42.757 Active Namespaces 00:07:42.757 ================= 00:07:42.757 Namespace ID:1 00:07:42.757 Error Recovery Timeout: Unlimited 00:07:42.757 Command Set Identifier: NVM (00h) 00:07:42.757 Deallocate: Supported 00:07:42.757 Deallocated/Unwritten Error: Supported 00:07:42.757 Deallocated Read Value: All 0x00 00:07:42.757 Deallocate in Write Zeroes: Not Supported 00:07:42.757 Deallocated Guard Field: 0xFFFF 00:07:42.757 Flush: Supported 00:07:42.757 Reservation: Not Supported 00:07:42.757 Namespace Sharing Capabilities: Private 00:07:42.757 Size (in LBAs): 1310720 (5GiB) 00:07:42.757 Capacity (in LBAs): 1310720 (5GiB) 00:07:42.757 Utilization (in LBAs): 1310720 (5GiB) 00:07:42.757 Thin Provisioning: Not Supported 00:07:42.757 Per-NS Atomic Units: No 00:07:42.757 Maximum Single Source Range Length: 128 00:07:42.757 Maximum Copy Length: 128 00:07:42.757 Maximum Source Range Count: 128 00:07:42.757 NGUID/EUI64 Never Reused: No 00:07:42.757 Namespace Write Protected: No 00:07:42.757 Number of LBA Formats: 8 00:07:42.757 Current LBA Format: LBA Format #04 00:07:42.757 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.757 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.757 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.757 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.757 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.757 [2024-12-14 12:33:42.322097] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64554 terminated unexpected 00:07:42.757 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.757 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.757 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.757 00:07:42.757 NVM Specific Namespace Data 00:07:42.757 =========================== 00:07:42.757 Logical Block Storage Tag Mask: 0 00:07:42.757 Protection Information Capabilities: 00:07:42.757 16b Guard Protection Information Storage Tag Support: No 00:07:42.757 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.757 Storage Tag Check Read Support: No 00:07:42.757 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.757 ===================================================== 00:07:42.757 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:42.757 ===================================================== 00:07:42.757 Controller Capabilities/Features 00:07:42.757 ================================ 00:07:42.757 Vendor ID: 1b36 00:07:42.757 Subsystem Vendor ID: 1af4 00:07:42.757 Serial Number: 12342 00:07:42.757 Model Number: QEMU NVMe Ctrl 00:07:42.757 Firmware Version: 8.0.0 00:07:42.757 Recommended Arb Burst: 6 00:07:42.757 IEEE OUI Identifier: 00 54 52 00:07:42.757 Multi-path I/O
00:07:42.757 May have multiple subsystem ports: No 00:07:42.758 May have multiple controllers: No 00:07:42.758 Associated with SR-IOV VF: No 00:07:42.758 Max Data Transfer Size: 524288 00:07:42.758 Max Number of Namespaces: 256 00:07:42.758 Max Number of I/O Queues: 64 00:07:42.758 NVMe Specification Version (VS): 1.4 00:07:42.758 NVMe Specification Version (Identify): 1.4 00:07:42.758 Maximum Queue Entries: 2048 00:07:42.758 Contiguous Queues Required: Yes 00:07:42.758 Arbitration Mechanisms Supported 00:07:42.758 Weighted Round Robin: Not Supported 00:07:42.758 Vendor Specific: Not Supported 00:07:42.758 Reset Timeout: 7500 ms 00:07:42.758 Doorbell Stride: 4 bytes 00:07:42.758 NVM Subsystem Reset: Not Supported 00:07:42.758 Command Sets Supported 00:07:42.758 NVM Command Set: Supported 00:07:42.758 Boot Partition: Not Supported 00:07:42.758 Memory Page Size Minimum: 4096 bytes 00:07:42.758 Memory Page Size Maximum: 65536 bytes 00:07:42.758 Persistent Memory Region: Not Supported 00:07:42.758 Optional Asynchronous Events Supported 00:07:42.758 Namespace Attribute Notices: Supported 00:07:42.758 Firmware Activation Notices: Not Supported 00:07:42.758 ANA Change Notices: Not Supported 00:07:42.758 PLE Aggregate Log Change Notices: Not Supported 00:07:42.758 LBA Status Info Alert Notices: Not Supported 00:07:42.758 EGE Aggregate Log Change Notices: Not Supported 00:07:42.758 Normal NVM Subsystem Shutdown event: Not Supported 00:07:42.758 Zone Descriptor Change Notices: Not Supported 00:07:42.758 Discovery Log Change Notices: Not Supported 00:07:42.758 Controller Attributes 00:07:42.758 128-bit Host Identifier: Not Supported 00:07:42.758 Non-Operational Permissive Mode: Not Supported 00:07:42.758 NVM Sets: Not Supported 00:07:42.758 Read Recovery Levels: Not Supported 00:07:42.758 Endurance Groups: Not Supported 00:07:42.758 Predictable Latency Mode: Not Supported 00:07:42.758 Traffic Based Keep ALive: Not Supported 00:07:42.758 Namespace Granularity: Not Supported 00:07:42.758 SQ Associations: Not Supported 00:07:42.758 UUID List: Not Supported 00:07:42.758 Multi-Domain Subsystem: Not Supported 00:07:42.758 Fixed Capacity Management: Not Supported 00:07:42.758 Variable Capacity Management: Not Supported 00:07:42.758 Delete Endurance Group: Not Supported 00:07:42.758 Delete NVM Set: Not Supported 00:07:42.758 Extended LBA Formats Supported: Supported 00:07:42.758 Flexible Data Placement Supported: Not Supported 00:07:42.758 00:07:42.758 Controller Memory Buffer Support 00:07:42.758 ================================ 00:07:42.758 Supported: No 00:07:42.758 00:07:42.758 Persistent Memory Region Support 00:07:42.758 ================================ 00:07:42.758 Supported: No 00:07:42.758 00:07:42.758 Admin Command Set Attributes 00:07:42.758 ============================ 00:07:42.758 Security Send/Receive: Not Supported 00:07:42.758 Format NVM: Supported 00:07:42.758 Firmware Activate/Download: Not Supported 00:07:42.758 Namespace Management: Supported 00:07:42.758 Device Self-Test: Not Supported 00:07:42.758 Directives: Supported 00:07:42.758 NVMe-MI: Not Supported 00:07:42.758 Virtualization Management: Not Supported 00:07:42.758 Doorbell Buffer Config: Supported 00:07:42.758 Get LBA Status Capability: Not Supported 00:07:42.758 Command & Feature Lockdown Capability: Not Supported 00:07:42.758 Abort Command Limit: 4 00:07:42.758 Async Event Request Limit: 4 00:07:42.758 Number of Firmware Slots: N/A 00:07:42.758 Firmware Slot 1 Read-Only: N/A 00:07:42.758 Firmware Activation Without Reset: N/A 
00:07:42.758 Multiple Update Detection Support: N/A 00:07:42.758 Firmware Update Granularity: No Information Provided 00:07:42.758 Per-Namespace SMART Log: Yes 00:07:42.758 Asymmetric Namespace Access Log Page: Not Supported 00:07:42.758 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:42.758 Command Effects Log Page: Supported 00:07:42.758 Get Log Page Extended Data: Supported 00:07:42.758 Telemetry Log Pages: Not Supported 00:07:42.758 Persistent Event Log Pages: Not Supported 00:07:42.758 Supported Log Pages Log Page: May Support 00:07:42.758 Commands Supported & Effects Log Page: Not Supported 00:07:42.758 Feature Identifiers & Effects Log Page:May Support 00:07:42.758 NVMe-MI Commands & Effects Log Page: May Support 00:07:42.758 Data Area 4 for Telemetry Log: Not Supported 00:07:42.758 Error Log Page Entries Supported: 1 00:07:42.758 Keep Alive: Not Supported 00:07:42.758 00:07:42.758 NVM Command Set Attributes 00:07:42.758 ========================== 00:07:42.758 Submission Queue Entry Size 00:07:42.758 Max: 64 00:07:42.758 Min: 64 00:07:42.758 Completion Queue Entry Size 00:07:42.758 Max: 16 00:07:42.758 Min: 16 00:07:42.758 Number of Namespaces: 256 00:07:42.758 Compare Command: Supported 00:07:42.758 Write Uncorrectable Command: Not Supported 00:07:42.758 Dataset Management Command: Supported 00:07:42.758 Write Zeroes Command: Supported 00:07:42.758 Set Features Save Field: Supported 00:07:42.758 Reservations: Not Supported 00:07:42.758 Timestamp: Supported 00:07:42.758 Copy: Supported 00:07:42.758 Volatile Write Cache: Present 00:07:42.758 Atomic Write Unit (Normal): 1 00:07:42.758 Atomic Write Unit (PFail): 1 00:07:42.758 Atomic Compare & Write Unit: 1 00:07:42.758 Fused Compare & Write: Not Supported 00:07:42.758 Scatter-Gather List 00:07:42.758 SGL Command Set: Supported 00:07:42.758 SGL Keyed: Not Supported 00:07:42.758 SGL Bit Bucket Descriptor: Not Supported 00:07:42.758 SGL Metadata Pointer: Not Supported 00:07:42.758 Oversized SGL: Not Supported 00:07:42.758 SGL Metadata Address: Not Supported 00:07:42.758 SGL Offset: Not Supported 00:07:42.758 Transport SGL Data Block: Not Supported 00:07:42.758 Replay Protected Memory Block: Not Supported 00:07:42.758 00:07:42.758 Firmware Slot Information 00:07:42.758 ========================= 00:07:42.758 Active slot: 1 00:07:42.758 Slot 1 Firmware Revision: 1.0 00:07:42.758 00:07:42.758 00:07:42.758 Commands Supported and Effects 00:07:42.758 ============================== 00:07:42.758 Admin Commands 00:07:42.758 -------------- 00:07:42.758 Delete I/O Submission Queue (00h): Supported 00:07:42.758 Create I/O Submission Queue (01h): Supported 00:07:42.758 Get Log Page (02h): Supported 00:07:42.758 Delete I/O Completion Queue (04h): Supported 00:07:42.758 Create I/O Completion Queue (05h): Supported 00:07:42.758 Identify (06h): Supported 00:07:42.758 Abort (08h): Supported 00:07:42.758 Set Features (09h): Supported 00:07:42.758 Get Features (0Ah): Supported 00:07:42.758 Asynchronous Event Request (0Ch): Supported 00:07:42.758 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:42.758 Directive Send (19h): Supported 00:07:42.758 Directive Receive (1Ah): Supported 00:07:42.758 Virtualization Management (1Ch): Supported 00:07:42.758 Doorbell Buffer Config (7Ch): Supported 00:07:42.758 Format NVM (80h): Supported LBA-Change 00:07:42.758 I/O Commands 00:07:42.758 ------------ 00:07:42.758 Flush (00h): Supported LBA-Change 00:07:42.758 Write (01h): Supported LBA-Change 00:07:42.758 Read (02h): Supported 00:07:42.758 Compare (05h): 
Supported 00:07:42.758 Write Zeroes (08h): Supported LBA-Change 00:07:42.758 Dataset Management (09h): Supported LBA-Change 00:07:42.758 Unknown (0Ch): Supported 00:07:42.758 Unknown (12h): Supported 00:07:42.758 Copy (19h): Supported LBA-Change 00:07:42.758 Unknown (1Dh): Supported LBA-Change 00:07:42.758 00:07:42.758 Error Log 00:07:42.758 ========= 00:07:42.758 00:07:42.758 Arbitration 00:07:42.758 =========== 00:07:42.758 Arbitration Burst: no limit 00:07:42.758 00:07:42.758 Power Management 00:07:42.759 ================ 00:07:42.759 Number of Power States: 1 00:07:42.759 Current Power State: Power State #0 00:07:42.759 Power State #0: 00:07:42.759 Max Power: 25.00 W 00:07:42.759 Non-Operational State: Operational 00:07:42.759 Entry Latency: 16 microseconds 00:07:42.759 Exit Latency: 4 microseconds 00:07:42.759 Relative Read Throughput: 0 00:07:42.759 Relative Read Latency: 0 00:07:42.759 Relative Write Throughput: 0 00:07:42.759 Relative Write Latency: 0 00:07:42.759 Idle Power: Not Reported 00:07:42.759 Active Power: Not Reported 00:07:42.759 Non-Operational Permissive Mode: Not Supported 00:07:42.759 00:07:42.759 Health Information 00:07:42.759 ================== 00:07:42.759 Critical Warnings: 00:07:42.759 Available Spare Space: OK 00:07:42.759 Temperature: OK 00:07:42.759 Device Reliability: OK 00:07:42.759 Read Only: No 00:07:42.759 Volatile Memory Backup: OK 00:07:42.759 Current Temperature: 323 Kelvin (50 Celsius) 00:07:42.759 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:42.759 Available Spare: 0% 00:07:42.759 Available Spare Threshold: 0% 00:07:42.759 Life Percentage Used: 0% 00:07:42.759 Data Units Read: 2281 00:07:42.759 Data Units Written: 2068 00:07:42.759 Host Read Commands: 118199 00:07:42.759 Host Write Commands: 116469 00:07:42.759 Controller Busy Time: 0 minutes 00:07:42.759 Power Cycles: 0 00:07:42.759 Power On Hours: 0 hours 00:07:42.759 Unsafe Shutdowns: 0 00:07:42.759 Unrecoverable Media Errors: 0 00:07:42.759 Lifetime Error Log Entries: 0 00:07:42.759 Warning Temperature Time: 0 minutes 00:07:42.759 Critical Temperature Time: 0 minutes 00:07:42.759 00:07:42.759 Number of Queues 00:07:42.759 ================ 00:07:42.759 Number of I/O Submission Queues: 64 00:07:42.759 Number of I/O Completion Queues: 64 00:07:42.759 00:07:42.759 ZNS Specific Controller Data 00:07:42.759 ============================ 00:07:42.759 Zone Append Size Limit: 0 00:07:42.759 00:07:42.759 00:07:42.759 Active Namespaces 00:07:42.759 ================= 00:07:42.759 Namespace ID:1 00:07:42.759 Error Recovery Timeout: Unlimited 00:07:42.759 Command Set Identifier: NVM (00h) 00:07:42.759 Deallocate: Supported 00:07:42.759 Deallocated/Unwritten Error: Supported 00:07:42.759 Deallocated Read Value: All 0x00 00:07:42.759 Deallocate in Write Zeroes: Not Supported 00:07:42.759 Deallocated Guard Field: 0xFFFF 00:07:42.759 Flush: Supported 00:07:42.759 Reservation: Not Supported 00:07:42.759 Namespace Sharing Capabilities: Private 00:07:42.759 Size (in LBAs): 1048576 (4GiB) 00:07:42.759 Capacity (in LBAs): 1048576 (4GiB) 00:07:42.759 Utilization (in LBAs): 1048576 (4GiB) 00:07:42.759 Thin Provisioning: Not Supported 00:07:42.759 Per-NS Atomic Units: No 00:07:42.759 Maximum Single Source Range Length: 128 00:07:42.759 Maximum Copy Length: 128 00:07:42.759 Maximum Source Range Count: 128 00:07:42.759 NGUID/EUI64 Never Reused: No 00:07:42.759 Namespace Write Protected: No 00:07:42.759 Number of LBA Formats: 8 00:07:42.759 Current LBA Format: LBA Format #04 00:07:42.759 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:07:42.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.759 00:07:42.759 NVM Specific Namespace Data 00:07:42.759 =========================== 00:07:42.759 Logical Block Storage Tag Mask: 0 00:07:42.759 Protection Information Capabilities: 00:07:42.759 16b Guard Protection Information Storage Tag Support: No 00:07:42.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.759 Storage Tag Check Read Support: No 00:07:42.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Namespace ID:2 00:07:42.759 Error Recovery Timeout: Unlimited 00:07:42.759 Command Set Identifier: NVM (00h) 00:07:42.759 Deallocate: Supported 00:07:42.759 Deallocated/Unwritten Error: Supported 00:07:42.759 Deallocated Read Value: All 0x00 00:07:42.759 Deallocate in Write Zeroes: Not Supported 00:07:42.759 Deallocated Guard Field: 0xFFFF 00:07:42.759 Flush: Supported 00:07:42.759 Reservation: Not Supported 00:07:42.759 Namespace Sharing Capabilities: Private 00:07:42.759 Size (in LBAs): 1048576 (4GiB) 00:07:42.759 Capacity (in LBAs): 1048576 (4GiB) 00:07:42.759 Utilization (in LBAs): 1048576 (4GiB) 00:07:42.759 Thin Provisioning: Not Supported 00:07:42.759 Per-NS Atomic Units: No 00:07:42.759 Maximum Single Source Range Length: 128 00:07:42.759 Maximum Copy Length: 128 00:07:42.759 Maximum Source Range Count: 128 00:07:42.759 NGUID/EUI64 Never Reused: No 00:07:42.759 Namespace Write Protected: No 00:07:42.759 Number of LBA Formats: 8 00:07:42.759 Current LBA Format: LBA Format #04 00:07:42.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.759 00:07:42.759 NVM Specific Namespace Data 00:07:42.759 =========================== 00:07:42.759 Logical Block Storage Tag Mask: 0 00:07:42.759 Protection Information Capabilities: 00:07:42.759 16b Guard Protection Information Storage Tag Support: No 00:07:42.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:42.759 Storage Tag Check Read Support: No 00:07:42.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Namespace ID:3 00:07:42.759 Error Recovery Timeout: Unlimited 00:07:42.759 Command Set Identifier: NVM (00h) 00:07:42.759 Deallocate: Supported 00:07:42.759 Deallocated/Unwritten Error: Supported 00:07:42.759 Deallocated Read Value: All 0x00 00:07:42.759 Deallocate in Write Zeroes: Not Supported 00:07:42.759 Deallocated Guard Field: 0xFFFF 00:07:42.759 Flush: Supported 00:07:42.759 Reservation: Not Supported 00:07:42.759 Namespace Sharing Capabilities: Private 00:07:42.759 Size (in LBAs): 1048576 (4GiB) 00:07:42.759 Capacity (in LBAs): 1048576 (4GiB) 00:07:42.759 Utilization (in LBAs): 1048576 (4GiB) 00:07:42.759 Thin Provisioning: Not Supported 00:07:42.759 Per-NS Atomic Units: No 00:07:42.759 Maximum Single Source Range Length: 128 00:07:42.759 Maximum Copy Length: 128 00:07:42.759 Maximum Source Range Count: 128 00:07:42.759 NGUID/EUI64 Never Reused: No 00:07:42.759 Namespace Write Protected: No 00:07:42.759 Number of LBA Formats: 8 00:07:42.759 Current LBA Format: LBA Format #04 00:07:42.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:42.759 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:42.759 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:42.759 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:42.759 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:42.759 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:42.759 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:42.759 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:42.759 00:07:42.759 NVM Specific Namespace Data 00:07:42.759 =========================== 00:07:42.759 Logical Block Storage Tag Mask: 0 00:07:42.759 Protection Information Capabilities: 00:07:42.759 16b Guard Protection Information Storage Tag Support: No 00:07:42.759 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:42.759 Storage Tag Check Read Support: No 00:07:42.759 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.759 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:42.760 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:42.760 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:43.056 ===================================================== 00:07:43.056 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:43.056 ===================================================== 00:07:43.056 Controller Capabilities/Features 00:07:43.056 ================================ 00:07:43.056 Vendor ID: 1b36 00:07:43.056 Subsystem Vendor ID: 1af4 00:07:43.056 Serial Number: 12340 00:07:43.056 Model Number: QEMU NVMe Ctrl 00:07:43.056 Firmware Version: 8.0.0 00:07:43.056 Recommended Arb Burst: 6 00:07:43.056 IEEE OUI Identifier: 00 54 52 00:07:43.056 Multi-path I/O 00:07:43.056 May have multiple subsystem ports: No 00:07:43.056 May have multiple controllers: No 00:07:43.056 Associated with SR-IOV VF: No 00:07:43.056 Max Data Transfer Size: 524288 00:07:43.056 Max Number of Namespaces: 256 00:07:43.056 Max Number of I/O Queues: 64 00:07:43.056 NVMe Specification Version (VS): 1.4 00:07:43.056 NVMe Specification Version (Identify): 1.4 00:07:43.056 Maximum Queue Entries: 2048 00:07:43.056 Contiguous Queues Required: Yes 00:07:43.056 Arbitration Mechanisms Supported 00:07:43.056 Weighted Round Robin: Not Supported 00:07:43.056 Vendor Specific: Not Supported 00:07:43.056 Reset Timeout: 7500 ms 00:07:43.056 Doorbell Stride: 4 bytes 00:07:43.056 NVM Subsystem Reset: Not Supported 00:07:43.056 Command Sets Supported 00:07:43.056 NVM Command Set: Supported 00:07:43.056 Boot Partition: Not Supported 00:07:43.056 Memory Page Size Minimum: 4096 bytes 00:07:43.056 Memory Page Size Maximum: 65536 bytes 00:07:43.056 Persistent Memory Region: Not Supported 00:07:43.056 Optional Asynchronous Events Supported 00:07:43.056 Namespace Attribute Notices: Supported 00:07:43.056 Firmware Activation Notices: Not Supported 00:07:43.056 ANA Change Notices: Not Supported 00:07:43.056 PLE Aggregate Log Change Notices: Not Supported 00:07:43.056 LBA Status Info Alert Notices: Not Supported 00:07:43.056 EGE Aggregate Log Change Notices: Not Supported 00:07:43.056 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.056 Zone Descriptor Change Notices: Not Supported 00:07:43.056 Discovery Log Change Notices: Not Supported 00:07:43.056 Controller Attributes 00:07:43.056 128-bit Host Identifier: Not Supported 00:07:43.056 Non-Operational Permissive Mode: Not Supported 00:07:43.056 NVM Sets: Not Supported 00:07:43.056 Read Recovery Levels: Not Supported 00:07:43.056 Endurance Groups: Not Supported 00:07:43.056 Predictable Latency Mode: Not Supported 00:07:43.056 Traffic Based Keep ALive: Not Supported 00:07:43.057 Namespace Granularity: Not Supported 00:07:43.057 SQ Associations: Not Supported 00:07:43.057 UUID List: Not Supported 00:07:43.057 Multi-Domain Subsystem: Not Supported 00:07:43.057 Fixed Capacity Management: Not Supported 00:07:43.057 Variable Capacity Management: Not Supported 00:07:43.057 Delete Endurance Group: Not Supported 00:07:43.057 Delete NVM Set: Not Supported 00:07:43.057 Extended LBA Formats Supported: Supported 00:07:43.057 Flexible Data Placement Supported: Not Supported 00:07:43.057 00:07:43.057 Controller Memory Buffer Support 00:07:43.057 ================================ 00:07:43.057 Supported: No 00:07:43.057 00:07:43.057 Persistent Memory Region Support 00:07:43.057 
================================ 00:07:43.057 Supported: No 00:07:43.057 00:07:43.057 Admin Command Set Attributes 00:07:43.057 ============================ 00:07:43.057 Security Send/Receive: Not Supported 00:07:43.057 Format NVM: Supported 00:07:43.057 Firmware Activate/Download: Not Supported 00:07:43.057 Namespace Management: Supported 00:07:43.057 Device Self-Test: Not Supported 00:07:43.057 Directives: Supported 00:07:43.057 NVMe-MI: Not Supported 00:07:43.057 Virtualization Management: Not Supported 00:07:43.057 Doorbell Buffer Config: Supported 00:07:43.057 Get LBA Status Capability: Not Supported 00:07:43.057 Command & Feature Lockdown Capability: Not Supported 00:07:43.057 Abort Command Limit: 4 00:07:43.057 Async Event Request Limit: 4 00:07:43.057 Number of Firmware Slots: N/A 00:07:43.057 Firmware Slot 1 Read-Only: N/A 00:07:43.057 Firmware Activation Without Reset: N/A 00:07:43.057 Multiple Update Detection Support: N/A 00:07:43.057 Firmware Update Granularity: No Information Provided 00:07:43.057 Per-Namespace SMART Log: Yes 00:07:43.057 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.057 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:43.057 Command Effects Log Page: Supported 00:07:43.057 Get Log Page Extended Data: Supported 00:07:43.057 Telemetry Log Pages: Not Supported 00:07:43.057 Persistent Event Log Pages: Not Supported 00:07:43.057 Supported Log Pages Log Page: May Support 00:07:43.057 Commands Supported & Effects Log Page: Not Supported 00:07:43.057 Feature Identifiers & Effects Log Page:May Support 00:07:43.057 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.057 Data Area 4 for Telemetry Log: Not Supported 00:07:43.057 Error Log Page Entries Supported: 1 00:07:43.057 Keep Alive: Not Supported 00:07:43.057 00:07:43.057 NVM Command Set Attributes 00:07:43.057 ========================== 00:07:43.057 Submission Queue Entry Size 00:07:43.057 Max: 64 00:07:43.057 Min: 64 00:07:43.057 Completion Queue Entry Size 00:07:43.057 Max: 16 00:07:43.057 Min: 16 00:07:43.057 Number of Namespaces: 256 00:07:43.057 Compare Command: Supported 00:07:43.057 Write Uncorrectable Command: Not Supported 00:07:43.057 Dataset Management Command: Supported 00:07:43.057 Write Zeroes Command: Supported 00:07:43.057 Set Features Save Field: Supported 00:07:43.057 Reservations: Not Supported 00:07:43.057 Timestamp: Supported 00:07:43.057 Copy: Supported 00:07:43.057 Volatile Write Cache: Present 00:07:43.057 Atomic Write Unit (Normal): 1 00:07:43.057 Atomic Write Unit (PFail): 1 00:07:43.057 Atomic Compare & Write Unit: 1 00:07:43.057 Fused Compare & Write: Not Supported 00:07:43.057 Scatter-Gather List 00:07:43.057 SGL Command Set: Supported 00:07:43.057 SGL Keyed: Not Supported 00:07:43.057 SGL Bit Bucket Descriptor: Not Supported 00:07:43.057 SGL Metadata Pointer: Not Supported 00:07:43.057 Oversized SGL: Not Supported 00:07:43.057 SGL Metadata Address: Not Supported 00:07:43.057 SGL Offset: Not Supported 00:07:43.057 Transport SGL Data Block: Not Supported 00:07:43.057 Replay Protected Memory Block: Not Supported 00:07:43.057 00:07:43.057 Firmware Slot Information 00:07:43.057 ========================= 00:07:43.057 Active slot: 1 00:07:43.057 Slot 1 Firmware Revision: 1.0 00:07:43.057 00:07:43.057 00:07:43.057 Commands Supported and Effects 00:07:43.057 ============================== 00:07:43.057 Admin Commands 00:07:43.057 -------------- 00:07:43.057 Delete I/O Submission Queue (00h): Supported 00:07:43.057 Create I/O Submission Queue (01h): Supported 00:07:43.057 
Get Log Page (02h): Supported 00:07:43.057 Delete I/O Completion Queue (04h): Supported 00:07:43.057 Create I/O Completion Queue (05h): Supported 00:07:43.057 Identify (06h): Supported 00:07:43.057 Abort (08h): Supported 00:07:43.057 Set Features (09h): Supported 00:07:43.057 Get Features (0Ah): Supported 00:07:43.057 Asynchronous Event Request (0Ch): Supported 00:07:43.057 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.057 Directive Send (19h): Supported 00:07:43.057 Directive Receive (1Ah): Supported 00:07:43.057 Virtualization Management (1Ch): Supported 00:07:43.057 Doorbell Buffer Config (7Ch): Supported 00:07:43.057 Format NVM (80h): Supported LBA-Change 00:07:43.057 I/O Commands 00:07:43.057 ------------ 00:07:43.057 Flush (00h): Supported LBA-Change 00:07:43.057 Write (01h): Supported LBA-Change 00:07:43.057 Read (02h): Supported 00:07:43.057 Compare (05h): Supported 00:07:43.057 Write Zeroes (08h): Supported LBA-Change 00:07:43.057 Dataset Management (09h): Supported LBA-Change 00:07:43.057 Unknown (0Ch): Supported 00:07:43.057 Unknown (12h): Supported 00:07:43.057 Copy (19h): Supported LBA-Change 00:07:43.057 Unknown (1Dh): Supported LBA-Change 00:07:43.057 00:07:43.057 Error Log 00:07:43.057 ========= 00:07:43.057 00:07:43.057 Arbitration 00:07:43.057 =========== 00:07:43.057 Arbitration Burst: no limit 00:07:43.057 00:07:43.057 Power Management 00:07:43.057 ================ 00:07:43.057 Number of Power States: 1 00:07:43.057 Current Power State: Power State #0 00:07:43.057 Power State #0: 00:07:43.057 Max Power: 25.00 W 00:07:43.057 Non-Operational State: Operational 00:07:43.057 Entry Latency: 16 microseconds 00:07:43.057 Exit Latency: 4 microseconds 00:07:43.057 Relative Read Throughput: 0 00:07:43.057 Relative Read Latency: 0 00:07:43.057 Relative Write Throughput: 0 00:07:43.057 Relative Write Latency: 0 00:07:43.057 Idle Power: Not Reported 00:07:43.057 Active Power: Not Reported 00:07:43.057 Non-Operational Permissive Mode: Not Supported 00:07:43.057 00:07:43.057 Health Information 00:07:43.057 ================== 00:07:43.057 Critical Warnings: 00:07:43.057 Available Spare Space: OK 00:07:43.057 Temperature: OK 00:07:43.057 Device Reliability: OK 00:07:43.057 Read Only: No 00:07:43.057 Volatile Memory Backup: OK 00:07:43.057 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.057 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.057 Available Spare: 0% 00:07:43.057 Available Spare Threshold: 0% 00:07:43.057 Life Percentage Used: 0% 00:07:43.057 Data Units Read: 705 00:07:43.057 Data Units Written: 633 00:07:43.057 Host Read Commands: 38707 00:07:43.057 Host Write Commands: 38493 00:07:43.057 Controller Busy Time: 0 minutes 00:07:43.057 Power Cycles: 0 00:07:43.057 Power On Hours: 0 hours 00:07:43.057 Unsafe Shutdowns: 0 00:07:43.057 Unrecoverable Media Errors: 0 00:07:43.057 Lifetime Error Log Entries: 0 00:07:43.057 Warning Temperature Time: 0 minutes 00:07:43.057 Critical Temperature Time: 0 minutes 00:07:43.057 00:07:43.057 Number of Queues 00:07:43.057 ================ 00:07:43.057 Number of I/O Submission Queues: 64 00:07:43.057 Number of I/O Completion Queues: 64 00:07:43.057 00:07:43.057 ZNS Specific Controller Data 00:07:43.057 ============================ 00:07:43.057 Zone Append Size Limit: 0 00:07:43.057 00:07:43.057 00:07:43.057 Active Namespaces 00:07:43.057 ================= 00:07:43.057 Namespace ID:1 00:07:43.057 Error Recovery Timeout: Unlimited 00:07:43.057 Command Set Identifier: NVM (00h) 00:07:43.057 Deallocate: Supported 
00:07:43.057 Deallocated/Unwritten Error: Supported 00:07:43.057 Deallocated Read Value: All 0x00 00:07:43.057 Deallocate in Write Zeroes: Not Supported 00:07:43.057 Deallocated Guard Field: 0xFFFF 00:07:43.057 Flush: Supported 00:07:43.057 Reservation: Not Supported 00:07:43.057 Metadata Transferred as: Separate Metadata Buffer 00:07:43.057 Namespace Sharing Capabilities: Private 00:07:43.057 Size (in LBAs): 1548666 (5GiB) 00:07:43.057 Capacity (in LBAs): 1548666 (5GiB) 00:07:43.057 Utilization (in LBAs): 1548666 (5GiB) 00:07:43.057 Thin Provisioning: Not Supported 00:07:43.057 Per-NS Atomic Units: No 00:07:43.057 Maximum Single Source Range Length: 128 00:07:43.057 Maximum Copy Length: 128 00:07:43.057 Maximum Source Range Count: 128 00:07:43.057 NGUID/EUI64 Never Reused: No 00:07:43.058 Namespace Write Protected: No 00:07:43.058 Number of LBA Formats: 8 00:07:43.058 Current LBA Format: LBA Format #07 00:07:43.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.058 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.058 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.058 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.058 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.058 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.058 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.058 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.058 00:07:43.058 NVM Specific Namespace Data 00:07:43.058 =========================== 00:07:43.058 Logical Block Storage Tag Mask: 0 00:07:43.058 Protection Information Capabilities: 00:07:43.058 16b Guard Protection Information Storage Tag Support: No 00:07:43.058 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.058 Storage Tag Check Read Support: No 00:07:43.058 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.058 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.058 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:43.320 ===================================================== 00:07:43.320 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:43.320 ===================================================== 00:07:43.320 Controller Capabilities/Features 00:07:43.320 ================================ 00:07:43.320 Vendor ID: 1b36 00:07:43.320 Subsystem Vendor ID: 1af4 00:07:43.320 Serial Number: 12341 00:07:43.320 Model Number: QEMU NVMe Ctrl 00:07:43.320 Firmware Version: 8.0.0 00:07:43.320 Recommended Arb Burst: 6 00:07:43.320 IEEE OUI Identifier: 00 54 52 00:07:43.320 Multi-path I/O 00:07:43.320 May have multiple subsystem ports: No 00:07:43.320 May have multiple 
controllers: No 00:07:43.320 Associated with SR-IOV VF: No 00:07:43.320 Max Data Transfer Size: 524288 00:07:43.320 Max Number of Namespaces: 256 00:07:43.320 Max Number of I/O Queues: 64 00:07:43.320 NVMe Specification Version (VS): 1.4 00:07:43.320 NVMe Specification Version (Identify): 1.4 00:07:43.320 Maximum Queue Entries: 2048 00:07:43.320 Contiguous Queues Required: Yes 00:07:43.320 Arbitration Mechanisms Supported 00:07:43.320 Weighted Round Robin: Not Supported 00:07:43.320 Vendor Specific: Not Supported 00:07:43.320 Reset Timeout: 7500 ms 00:07:43.320 Doorbell Stride: 4 bytes 00:07:43.320 NVM Subsystem Reset: Not Supported 00:07:43.320 Command Sets Supported 00:07:43.320 NVM Command Set: Supported 00:07:43.320 Boot Partition: Not Supported 00:07:43.320 Memory Page Size Minimum: 4096 bytes 00:07:43.320 Memory Page Size Maximum: 65536 bytes 00:07:43.320 Persistent Memory Region: Not Supported 00:07:43.320 Optional Asynchronous Events Supported 00:07:43.320 Namespace Attribute Notices: Supported 00:07:43.320 Firmware Activation Notices: Not Supported 00:07:43.320 ANA Change Notices: Not Supported 00:07:43.320 PLE Aggregate Log Change Notices: Not Supported 00:07:43.320 LBA Status Info Alert Notices: Not Supported 00:07:43.320 EGE Aggregate Log Change Notices: Not Supported 00:07:43.320 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.320 Zone Descriptor Change Notices: Not Supported 00:07:43.320 Discovery Log Change Notices: Not Supported 00:07:43.320 Controller Attributes 00:07:43.320 128-bit Host Identifier: Not Supported 00:07:43.320 Non-Operational Permissive Mode: Not Supported 00:07:43.320 NVM Sets: Not Supported 00:07:43.320 Read Recovery Levels: Not Supported 00:07:43.320 Endurance Groups: Not Supported 00:07:43.320 Predictable Latency Mode: Not Supported 00:07:43.320 Traffic Based Keep ALive: Not Supported 00:07:43.320 Namespace Granularity: Not Supported 00:07:43.320 SQ Associations: Not Supported 00:07:43.320 UUID List: Not Supported 00:07:43.320 Multi-Domain Subsystem: Not Supported 00:07:43.320 Fixed Capacity Management: Not Supported 00:07:43.320 Variable Capacity Management: Not Supported 00:07:43.320 Delete Endurance Group: Not Supported 00:07:43.320 Delete NVM Set: Not Supported 00:07:43.320 Extended LBA Formats Supported: Supported 00:07:43.320 Flexible Data Placement Supported: Not Supported 00:07:43.320 00:07:43.320 Controller Memory Buffer Support 00:07:43.320 ================================ 00:07:43.320 Supported: No 00:07:43.320 00:07:43.320 Persistent Memory Region Support 00:07:43.320 ================================ 00:07:43.320 Supported: No 00:07:43.320 00:07:43.320 Admin Command Set Attributes 00:07:43.320 ============================ 00:07:43.320 Security Send/Receive: Not Supported 00:07:43.320 Format NVM: Supported 00:07:43.320 Firmware Activate/Download: Not Supported 00:07:43.320 Namespace Management: Supported 00:07:43.320 Device Self-Test: Not Supported 00:07:43.320 Directives: Supported 00:07:43.320 NVMe-MI: Not Supported 00:07:43.320 Virtualization Management: Not Supported 00:07:43.320 Doorbell Buffer Config: Supported 00:07:43.320 Get LBA Status Capability: Not Supported 00:07:43.320 Command & Feature Lockdown Capability: Not Supported 00:07:43.320 Abort Command Limit: 4 00:07:43.320 Async Event Request Limit: 4 00:07:43.320 Number of Firmware Slots: N/A 00:07:43.320 Firmware Slot 1 Read-Only: N/A 00:07:43.320 Firmware Activation Without Reset: N/A 00:07:43.320 Multiple Update Detection Support: N/A 00:07:43.320 Firmware Update 
Granularity: No Information Provided 00:07:43.320 Per-Namespace SMART Log: Yes 00:07:43.320 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.320 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:43.320 Command Effects Log Page: Supported 00:07:43.320 Get Log Page Extended Data: Supported 00:07:43.320 Telemetry Log Pages: Not Supported 00:07:43.320 Persistent Event Log Pages: Not Supported 00:07:43.320 Supported Log Pages Log Page: May Support 00:07:43.320 Commands Supported & Effects Log Page: Not Supported 00:07:43.320 Feature Identifiers & Effects Log Page:May Support 00:07:43.320 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.320 Data Area 4 for Telemetry Log: Not Supported 00:07:43.320 Error Log Page Entries Supported: 1 00:07:43.320 Keep Alive: Not Supported 00:07:43.320 00:07:43.320 NVM Command Set Attributes 00:07:43.320 ========================== 00:07:43.320 Submission Queue Entry Size 00:07:43.320 Max: 64 00:07:43.320 Min: 64 00:07:43.320 Completion Queue Entry Size 00:07:43.320 Max: 16 00:07:43.320 Min: 16 00:07:43.320 Number of Namespaces: 256 00:07:43.320 Compare Command: Supported 00:07:43.320 Write Uncorrectable Command: Not Supported 00:07:43.320 Dataset Management Command: Supported 00:07:43.320 Write Zeroes Command: Supported 00:07:43.320 Set Features Save Field: Supported 00:07:43.320 Reservations: Not Supported 00:07:43.320 Timestamp: Supported 00:07:43.320 Copy: Supported 00:07:43.320 Volatile Write Cache: Present 00:07:43.320 Atomic Write Unit (Normal): 1 00:07:43.320 Atomic Write Unit (PFail): 1 00:07:43.320 Atomic Compare & Write Unit: 1 00:07:43.320 Fused Compare & Write: Not Supported 00:07:43.320 Scatter-Gather List 00:07:43.320 SGL Command Set: Supported 00:07:43.320 SGL Keyed: Not Supported 00:07:43.320 SGL Bit Bucket Descriptor: Not Supported 00:07:43.320 SGL Metadata Pointer: Not Supported 00:07:43.320 Oversized SGL: Not Supported 00:07:43.320 SGL Metadata Address: Not Supported 00:07:43.320 SGL Offset: Not Supported 00:07:43.320 Transport SGL Data Block: Not Supported 00:07:43.320 Replay Protected Memory Block: Not Supported 00:07:43.320 00:07:43.320 Firmware Slot Information 00:07:43.320 ========================= 00:07:43.320 Active slot: 1 00:07:43.320 Slot 1 Firmware Revision: 1.0 00:07:43.320 00:07:43.320 00:07:43.320 Commands Supported and Effects 00:07:43.320 ============================== 00:07:43.320 Admin Commands 00:07:43.320 -------------- 00:07:43.320 Delete I/O Submission Queue (00h): Supported 00:07:43.320 Create I/O Submission Queue (01h): Supported 00:07:43.320 Get Log Page (02h): Supported 00:07:43.320 Delete I/O Completion Queue (04h): Supported 00:07:43.320 Create I/O Completion Queue (05h): Supported 00:07:43.320 Identify (06h): Supported 00:07:43.320 Abort (08h): Supported 00:07:43.320 Set Features (09h): Supported 00:07:43.320 Get Features (0Ah): Supported 00:07:43.320 Asynchronous Event Request (0Ch): Supported 00:07:43.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.320 Directive Send (19h): Supported 00:07:43.320 Directive Receive (1Ah): Supported 00:07:43.320 Virtualization Management (1Ch): Supported 00:07:43.320 Doorbell Buffer Config (7Ch): Supported 00:07:43.320 Format NVM (80h): Supported LBA-Change 00:07:43.320 I/O Commands 00:07:43.320 ------------ 00:07:43.320 Flush (00h): Supported LBA-Change 00:07:43.320 Write (01h): Supported LBA-Change 00:07:43.320 Read (02h): Supported 00:07:43.320 Compare (05h): Supported 00:07:43.320 Write Zeroes (08h): Supported LBA-Change 00:07:43.320 
Dataset Management (09h): Supported LBA-Change 00:07:43.320 Unknown (0Ch): Supported 00:07:43.320 Unknown (12h): Supported 00:07:43.320 Copy (19h): Supported LBA-Change 00:07:43.320 Unknown (1Dh): Supported LBA-Change 00:07:43.320 00:07:43.320 Error Log 00:07:43.320 ========= 00:07:43.320 00:07:43.320 Arbitration 00:07:43.320 =========== 00:07:43.320 Arbitration Burst: no limit 00:07:43.320 00:07:43.320 Power Management 00:07:43.320 ================ 00:07:43.320 Number of Power States: 1 00:07:43.320 Current Power State: Power State #0 00:07:43.320 Power State #0: 00:07:43.320 Max Power: 25.00 W 00:07:43.320 Non-Operational State: Operational 00:07:43.321 Entry Latency: 16 microseconds 00:07:43.321 Exit Latency: 4 microseconds 00:07:43.321 Relative Read Throughput: 0 00:07:43.321 Relative Read Latency: 0 00:07:43.321 Relative Write Throughput: 0 00:07:43.321 Relative Write Latency: 0 00:07:43.321 Idle Power: Not Reported 00:07:43.321 Active Power: Not Reported 00:07:43.321 Non-Operational Permissive Mode: Not Supported 00:07:43.321 00:07:43.321 Health Information 00:07:43.321 ================== 00:07:43.321 Critical Warnings: 00:07:43.321 Available Spare Space: OK 00:07:43.321 Temperature: OK 00:07:43.321 Device Reliability: OK 00:07:43.321 Read Only: No 00:07:43.321 Volatile Memory Backup: OK 00:07:43.321 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.321 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.321 Available Spare: 0% 00:07:43.321 Available Spare Threshold: 0% 00:07:43.321 Life Percentage Used: 0% 00:07:43.321 Data Units Read: 1078 00:07:43.321 Data Units Written: 948 00:07:43.321 Host Read Commands: 56884 00:07:43.321 Host Write Commands: 55719 00:07:43.321 Controller Busy Time: 0 minutes 00:07:43.321 Power Cycles: 0 00:07:43.321 Power On Hours: 0 hours 00:07:43.321 Unsafe Shutdowns: 0 00:07:43.321 Unrecoverable Media Errors: 0 00:07:43.321 Lifetime Error Log Entries: 0 00:07:43.321 Warning Temperature Time: 0 minutes 00:07:43.321 Critical Temperature Time: 0 minutes 00:07:43.321 00:07:43.321 Number of Queues 00:07:43.321 ================ 00:07:43.321 Number of I/O Submission Queues: 64 00:07:43.321 Number of I/O Completion Queues: 64 00:07:43.321 00:07:43.321 ZNS Specific Controller Data 00:07:43.321 ============================ 00:07:43.321 Zone Append Size Limit: 0 00:07:43.321 00:07:43.321 00:07:43.321 Active Namespaces 00:07:43.321 ================= 00:07:43.321 Namespace ID:1 00:07:43.321 Error Recovery Timeout: Unlimited 00:07:43.321 Command Set Identifier: NVM (00h) 00:07:43.321 Deallocate: Supported 00:07:43.321 Deallocated/Unwritten Error: Supported 00:07:43.321 Deallocated Read Value: All 0x00 00:07:43.321 Deallocate in Write Zeroes: Not Supported 00:07:43.321 Deallocated Guard Field: 0xFFFF 00:07:43.321 Flush: Supported 00:07:43.321 Reservation: Not Supported 00:07:43.321 Namespace Sharing Capabilities: Private 00:07:43.321 Size (in LBAs): 1310720 (5GiB) 00:07:43.321 Capacity (in LBAs): 1310720 (5GiB) 00:07:43.321 Utilization (in LBAs): 1310720 (5GiB) 00:07:43.321 Thin Provisioning: Not Supported 00:07:43.321 Per-NS Atomic Units: No 00:07:43.321 Maximum Single Source Range Length: 128 00:07:43.321 Maximum Copy Length: 128 00:07:43.321 Maximum Source Range Count: 128 00:07:43.321 NGUID/EUI64 Never Reused: No 00:07:43.321 Namespace Write Protected: No 00:07:43.321 Number of LBA Formats: 8 00:07:43.321 Current LBA Format: LBA Format #04 00:07:43.321 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.321 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:43.321 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.321 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.321 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.321 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.321 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.321 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.321 00:07:43.321 NVM Specific Namespace Data 00:07:43.321 =========================== 00:07:43.321 Logical Block Storage Tag Mask: 0 00:07:43.321 Protection Information Capabilities: 00:07:43.321 16b Guard Protection Information Storage Tag Support: No 00:07:43.321 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.321 Storage Tag Check Read Support: No 00:07:43.321 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.321 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.321 12:33:42 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:43.584 ===================================================== 00:07:43.584 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:43.584 ===================================================== 00:07:43.584 Controller Capabilities/Features 00:07:43.584 ================================ 00:07:43.584 Vendor ID: 1b36 00:07:43.584 Subsystem Vendor ID: 1af4 00:07:43.584 Serial Number: 12342 00:07:43.584 Model Number: QEMU NVMe Ctrl 00:07:43.584 Firmware Version: 8.0.0 00:07:43.584 Recommended Arb Burst: 6 00:07:43.584 IEEE OUI Identifier: 00 54 52 00:07:43.584 Multi-path I/O 00:07:43.584 May have multiple subsystem ports: No 00:07:43.584 May have multiple controllers: No 00:07:43.584 Associated with SR-IOV VF: No 00:07:43.584 Max Data Transfer Size: 524288 00:07:43.584 Max Number of Namespaces: 256 00:07:43.584 Max Number of I/O Queues: 64 00:07:43.584 NVMe Specification Version (VS): 1.4 00:07:43.584 NVMe Specification Version (Identify): 1.4 00:07:43.584 Maximum Queue Entries: 2048 00:07:43.584 Contiguous Queues Required: Yes 00:07:43.584 Arbitration Mechanisms Supported 00:07:43.584 Weighted Round Robin: Not Supported 00:07:43.584 Vendor Specific: Not Supported 00:07:43.584 Reset Timeout: 7500 ms 00:07:43.584 Doorbell Stride: 4 bytes 00:07:43.584 NVM Subsystem Reset: Not Supported 00:07:43.584 Command Sets Supported 00:07:43.584 NVM Command Set: Supported 00:07:43.584 Boot Partition: Not Supported 00:07:43.584 Memory Page Size Minimum: 4096 bytes 00:07:43.584 Memory Page Size Maximum: 65536 bytes 00:07:43.584 Persistent Memory Region: Not Supported 00:07:43.584 Optional Asynchronous Events Supported 00:07:43.584 Namespace Attribute Notices: Supported 00:07:43.584 Firmware 
Activation Notices: Not Supported 00:07:43.584 ANA Change Notices: Not Supported 00:07:43.584 PLE Aggregate Log Change Notices: Not Supported 00:07:43.584 LBA Status Info Alert Notices: Not Supported 00:07:43.584 EGE Aggregate Log Change Notices: Not Supported 00:07:43.584 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.584 Zone Descriptor Change Notices: Not Supported 00:07:43.584 Discovery Log Change Notices: Not Supported 00:07:43.584 Controller Attributes 00:07:43.584 128-bit Host Identifier: Not Supported 00:07:43.584 Non-Operational Permissive Mode: Not Supported 00:07:43.584 NVM Sets: Not Supported 00:07:43.584 Read Recovery Levels: Not Supported 00:07:43.584 Endurance Groups: Not Supported 00:07:43.584 Predictable Latency Mode: Not Supported 00:07:43.584 Traffic Based Keep ALive: Not Supported 00:07:43.584 Namespace Granularity: Not Supported 00:07:43.584 SQ Associations: Not Supported 00:07:43.584 UUID List: Not Supported 00:07:43.584 Multi-Domain Subsystem: Not Supported 00:07:43.584 Fixed Capacity Management: Not Supported 00:07:43.584 Variable Capacity Management: Not Supported 00:07:43.584 Delete Endurance Group: Not Supported 00:07:43.584 Delete NVM Set: Not Supported 00:07:43.584 Extended LBA Formats Supported: Supported 00:07:43.584 Flexible Data Placement Supported: Not Supported 00:07:43.584 00:07:43.584 Controller Memory Buffer Support 00:07:43.584 ================================ 00:07:43.584 Supported: No 00:07:43.584 00:07:43.584 Persistent Memory Region Support 00:07:43.584 ================================ 00:07:43.584 Supported: No 00:07:43.584 00:07:43.584 Admin Command Set Attributes 00:07:43.584 ============================ 00:07:43.584 Security Send/Receive: Not Supported 00:07:43.584 Format NVM: Supported 00:07:43.584 Firmware Activate/Download: Not Supported 00:07:43.584 Namespace Management: Supported 00:07:43.584 Device Self-Test: Not Supported 00:07:43.584 Directives: Supported 00:07:43.584 NVMe-MI: Not Supported 00:07:43.584 Virtualization Management: Not Supported 00:07:43.584 Doorbell Buffer Config: Supported 00:07:43.584 Get LBA Status Capability: Not Supported 00:07:43.584 Command & Feature Lockdown Capability: Not Supported 00:07:43.584 Abort Command Limit: 4 00:07:43.584 Async Event Request Limit: 4 00:07:43.584 Number of Firmware Slots: N/A 00:07:43.584 Firmware Slot 1 Read-Only: N/A 00:07:43.584 Firmware Activation Without Reset: N/A 00:07:43.584 Multiple Update Detection Support: N/A 00:07:43.584 Firmware Update Granularity: No Information Provided 00:07:43.584 Per-Namespace SMART Log: Yes 00:07:43.584 Asymmetric Namespace Access Log Page: Not Supported 00:07:43.584 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:43.584 Command Effects Log Page: Supported 00:07:43.584 Get Log Page Extended Data: Supported 00:07:43.584 Telemetry Log Pages: Not Supported 00:07:43.584 Persistent Event Log Pages: Not Supported 00:07:43.584 Supported Log Pages Log Page: May Support 00:07:43.584 Commands Supported & Effects Log Page: Not Supported 00:07:43.584 Feature Identifiers & Effects Log Page:May Support 00:07:43.584 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.584 Data Area 4 for Telemetry Log: Not Supported 00:07:43.584 Error Log Page Entries Supported: 1 00:07:43.584 Keep Alive: Not Supported 00:07:43.584 00:07:43.584 NVM Command Set Attributes 00:07:43.584 ========================== 00:07:43.584 Submission Queue Entry Size 00:07:43.584 Max: 64 00:07:43.584 Min: 64 00:07:43.584 Completion Queue Entry Size 00:07:43.584 Max: 16 
00:07:43.584 Min: 16 00:07:43.584 Number of Namespaces: 256 00:07:43.584 Compare Command: Supported 00:07:43.584 Write Uncorrectable Command: Not Supported 00:07:43.584 Dataset Management Command: Supported 00:07:43.584 Write Zeroes Command: Supported 00:07:43.584 Set Features Save Field: Supported 00:07:43.584 Reservations: Not Supported 00:07:43.584 Timestamp: Supported 00:07:43.584 Copy: Supported 00:07:43.584 Volatile Write Cache: Present 00:07:43.584 Atomic Write Unit (Normal): 1 00:07:43.584 Atomic Write Unit (PFail): 1 00:07:43.584 Atomic Compare & Write Unit: 1 00:07:43.584 Fused Compare & Write: Not Supported 00:07:43.584 Scatter-Gather List 00:07:43.584 SGL Command Set: Supported 00:07:43.584 SGL Keyed: Not Supported 00:07:43.584 SGL Bit Bucket Descriptor: Not Supported 00:07:43.584 SGL Metadata Pointer: Not Supported 00:07:43.584 Oversized SGL: Not Supported 00:07:43.584 SGL Metadata Address: Not Supported 00:07:43.584 SGL Offset: Not Supported 00:07:43.584 Transport SGL Data Block: Not Supported 00:07:43.584 Replay Protected Memory Block: Not Supported 00:07:43.584 00:07:43.584 Firmware Slot Information 00:07:43.584 ========================= 00:07:43.584 Active slot: 1 00:07:43.584 Slot 1 Firmware Revision: 1.0 00:07:43.584 00:07:43.584 00:07:43.584 Commands Supported and Effects 00:07:43.584 ============================== 00:07:43.584 Admin Commands 00:07:43.584 -------------- 00:07:43.584 Delete I/O Submission Queue (00h): Supported 00:07:43.584 Create I/O Submission Queue (01h): Supported 00:07:43.584 Get Log Page (02h): Supported 00:07:43.584 Delete I/O Completion Queue (04h): Supported 00:07:43.584 Create I/O Completion Queue (05h): Supported 00:07:43.584 Identify (06h): Supported 00:07:43.584 Abort (08h): Supported 00:07:43.584 Set Features (09h): Supported 00:07:43.584 Get Features (0Ah): Supported 00:07:43.584 Asynchronous Event Request (0Ch): Supported 00:07:43.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.584 Directive Send (19h): Supported 00:07:43.584 Directive Receive (1Ah): Supported 00:07:43.584 Virtualization Management (1Ch): Supported 00:07:43.584 Doorbell Buffer Config (7Ch): Supported 00:07:43.584 Format NVM (80h): Supported LBA-Change 00:07:43.584 I/O Commands 00:07:43.584 ------------ 00:07:43.584 Flush (00h): Supported LBA-Change 00:07:43.584 Write (01h): Supported LBA-Change 00:07:43.584 Read (02h): Supported 00:07:43.584 Compare (05h): Supported 00:07:43.584 Write Zeroes (08h): Supported LBA-Change 00:07:43.584 Dataset Management (09h): Supported LBA-Change 00:07:43.584 Unknown (0Ch): Supported 00:07:43.584 Unknown (12h): Supported 00:07:43.584 Copy (19h): Supported LBA-Change 00:07:43.584 Unknown (1Dh): Supported LBA-Change 00:07:43.584 00:07:43.584 Error Log 00:07:43.584 ========= 00:07:43.584 00:07:43.584 Arbitration 00:07:43.584 =========== 00:07:43.584 Arbitration Burst: no limit 00:07:43.584 00:07:43.584 Power Management 00:07:43.584 ================ 00:07:43.584 Number of Power States: 1 00:07:43.584 Current Power State: Power State #0 00:07:43.584 Power State #0: 00:07:43.584 Max Power: 25.00 W 00:07:43.585 Non-Operational State: Operational 00:07:43.585 Entry Latency: 16 microseconds 00:07:43.585 Exit Latency: 4 microseconds 00:07:43.585 Relative Read Throughput: 0 00:07:43.585 Relative Read Latency: 0 00:07:43.585 Relative Write Throughput: 0 00:07:43.585 Relative Write Latency: 0 00:07:43.585 Idle Power: Not Reported 00:07:43.585 Active Power: Not Reported 00:07:43.585 Non-Operational Permissive Mode: Not Supported 
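(Reader's aid, not part of the test output.) The Active Namespaces entries below report Size, Capacity, and Utilization "(in LBAs)" with a rounded binary-unit hint. A minimal C sketch of the byte conversion, assuming the data size of the namespace's current LBA format (format #04 here, 4096-byte data, no metadata):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

/* Illustrative: byte capacity implied by an "(in LBAs)" figure. */
int main(void)
{
	uint64_t nlbas = 1048576;     /* Size (in LBAs) from the dump below */
	uint64_t data_size = 4096;    /* LBA Format #04: Data Size: 4096 */
	uint64_t bytes = nlbas * data_size;

	printf("%" PRIu64 " bytes = %" PRIu64 " GiB\n", bytes, bytes >> 30);
	/* prints: 4294967296 bytes = 4 GiB, matching the "(4GiB)" hint below */
	return 0;
}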
00:07:43.585 00:07:43.585 Health Information 00:07:43.585 ================== 00:07:43.585 Critical Warnings: 00:07:43.585 Available Spare Space: OK 00:07:43.585 Temperature: OK 00:07:43.585 Device Reliability: OK 00:07:43.585 Read Only: No 00:07:43.585 Volatile Memory Backup: OK 00:07:43.585 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.585 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.585 Available Spare: 0% 00:07:43.585 Available Spare Threshold: 0% 00:07:43.585 Life Percentage Used: 0% 00:07:43.585 Data Units Read: 2281 00:07:43.585 Data Units Written: 2068 00:07:43.585 Host Read Commands: 118199 00:07:43.585 Host Write Commands: 116469 00:07:43.585 Controller Busy Time: 0 minutes 00:07:43.585 Power Cycles: 0 00:07:43.585 Power On Hours: 0 hours 00:07:43.585 Unsafe Shutdowns: 0 00:07:43.585 Unrecoverable Media Errors: 0 00:07:43.585 Lifetime Error Log Entries: 0 00:07:43.585 Warning Temperature Time: 0 minutes 00:07:43.585 Critical Temperature Time: 0 minutes 00:07:43.585 00:07:43.585 Number of Queues 00:07:43.585 ================ 00:07:43.585 Number of I/O Submission Queues: 64 00:07:43.585 Number of I/O Completion Queues: 64 00:07:43.585 00:07:43.585 ZNS Specific Controller Data 00:07:43.585 ============================ 00:07:43.585 Zone Append Size Limit: 0 00:07:43.585 00:07:43.585 00:07:43.585 Active Namespaces 00:07:43.585 ================= 00:07:43.585 Namespace ID:1 00:07:43.585 Error Recovery Timeout: Unlimited 00:07:43.585 Command Set Identifier: NVM (00h) 00:07:43.585 Deallocate: Supported 00:07:43.585 Deallocated/Unwritten Error: Supported 00:07:43.585 Deallocated Read Value: All 0x00 00:07:43.585 Deallocate in Write Zeroes: Not Supported 00:07:43.585 Deallocated Guard Field: 0xFFFF 00:07:43.585 Flush: Supported 00:07:43.585 Reservation: Not Supported 00:07:43.585 Namespace Sharing Capabilities: Private 00:07:43.585 Size (in LBAs): 1048576 (4GiB) 00:07:43.585 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.585 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.585 Thin Provisioning: Not Supported 00:07:43.585 Per-NS Atomic Units: No 00:07:43.585 Maximum Single Source Range Length: 128 00:07:43.585 Maximum Copy Length: 128 00:07:43.585 Maximum Source Range Count: 128 00:07:43.585 NGUID/EUI64 Never Reused: No 00:07:43.585 Namespace Write Protected: No 00:07:43.585 Number of LBA Formats: 8 00:07:43.585 Current LBA Format: LBA Format #04 00:07:43.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.585 00:07:43.585 NVM Specific Namespace Data 00:07:43.585 =========================== 00:07:43.585 Logical Block Storage Tag Mask: 0 00:07:43.585 Protection Information Capabilities: 00:07:43.585 16b Guard Protection Information Storage Tag Support: No 00:07:43.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.585 Storage Tag Check Read Support: No 00:07:43.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Namespace ID:2 00:07:43.585 Error Recovery Timeout: Unlimited 00:07:43.585 Command Set Identifier: NVM (00h) 00:07:43.585 Deallocate: Supported 00:07:43.585 Deallocated/Unwritten Error: Supported 00:07:43.585 Deallocated Read Value: All 0x00 00:07:43.585 Deallocate in Write Zeroes: Not Supported 00:07:43.585 Deallocated Guard Field: 0xFFFF 00:07:43.585 Flush: Supported 00:07:43.585 Reservation: Not Supported 00:07:43.585 Namespace Sharing Capabilities: Private 00:07:43.585 Size (in LBAs): 1048576 (4GiB) 00:07:43.585 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.585 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.585 Thin Provisioning: Not Supported 00:07:43.585 Per-NS Atomic Units: No 00:07:43.585 Maximum Single Source Range Length: 128 00:07:43.585 Maximum Copy Length: 128 00:07:43.585 Maximum Source Range Count: 128 00:07:43.585 NGUID/EUI64 Never Reused: No 00:07:43.585 Namespace Write Protected: No 00:07:43.585 Number of LBA Formats: 8 00:07:43.585 Current LBA Format: LBA Format #04 00:07:43.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.585 00:07:43.585 NVM Specific Namespace Data 00:07:43.585 =========================== 00:07:43.585 Logical Block Storage Tag Mask: 0 00:07:43.585 Protection Information Capabilities: 00:07:43.585 16b Guard Protection Information Storage Tag Support: No 00:07:43.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.585 Storage Tag Check Read Support: No 00:07:43.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Namespace ID:3 00:07:43.585 Error Recovery Timeout: Unlimited 00:07:43.585 Command Set Identifier: NVM (00h) 00:07:43.585 Deallocate: Supported 00:07:43.585 Deallocated/Unwritten Error: Supported 00:07:43.585 Deallocated Read 
Value: All 0x00 00:07:43.585 Deallocate in Write Zeroes: Not Supported 00:07:43.585 Deallocated Guard Field: 0xFFFF 00:07:43.585 Flush: Supported 00:07:43.585 Reservation: Not Supported 00:07:43.585 Namespace Sharing Capabilities: Private 00:07:43.585 Size (in LBAs): 1048576 (4GiB) 00:07:43.585 Capacity (in LBAs): 1048576 (4GiB) 00:07:43.585 Utilization (in LBAs): 1048576 (4GiB) 00:07:43.585 Thin Provisioning: Not Supported 00:07:43.585 Per-NS Atomic Units: No 00:07:43.585 Maximum Single Source Range Length: 128 00:07:43.585 Maximum Copy Length: 128 00:07:43.585 Maximum Source Range Count: 128 00:07:43.585 NGUID/EUI64 Never Reused: No 00:07:43.585 Namespace Write Protected: No 00:07:43.585 Number of LBA Formats: 8 00:07:43.585 Current LBA Format: LBA Format #04 00:07:43.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.585 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.585 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.585 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:43.585 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.585 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.585 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.585 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.585 00:07:43.585 NVM Specific Namespace Data 00:07:43.585 =========================== 00:07:43.585 Logical Block Storage Tag Mask: 0 00:07:43.585 Protection Information Capabilities: 00:07:43.585 16b Guard Protection Information Storage Tag Support: No 00:07:43.585 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.585 Storage Tag Check Read Support: No 00:07:43.585 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.585 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.586 12:33:43 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:43.586 12:33:43 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:43.847 ===================================================== 00:07:43.847 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:43.847 ===================================================== 00:07:43.847 Controller Capabilities/Features 00:07:43.847 ================================ 00:07:43.847 Vendor ID: 1b36 00:07:43.847 Subsystem Vendor ID: 1af4 00:07:43.847 Serial Number: 12343 00:07:43.847 Model Number: QEMU NVMe Ctrl 00:07:43.847 Firmware Version: 8.0.0 00:07:43.847 Recommended Arb Burst: 6 00:07:43.847 IEEE OUI Identifier: 00 54 52 00:07:43.847 Multi-path I/O 00:07:43.847 May have multiple subsystem ports: No 00:07:43.847 May have multiple controllers: Yes 00:07:43.847 Associated with SR-IOV VF: No 00:07:43.847 Max Data Transfer Size: 524288 00:07:43.847 Max Number of Namespaces: 
256 00:07:43.847 Max Number of I/O Queues: 64 00:07:43.847 NVMe Specification Version (VS): 1.4 00:07:43.847 NVMe Specification Version (Identify): 1.4 00:07:43.847 Maximum Queue Entries: 2048 00:07:43.847 Contiguous Queues Required: Yes 00:07:43.847 Arbitration Mechanisms Supported 00:07:43.847 Weighted Round Robin: Not Supported 00:07:43.847 Vendor Specific: Not Supported 00:07:43.847 Reset Timeout: 7500 ms 00:07:43.847 Doorbell Stride: 4 bytes 00:07:43.847 NVM Subsystem Reset: Not Supported 00:07:43.847 Command Sets Supported 00:07:43.847 NVM Command Set: Supported 00:07:43.847 Boot Partition: Not Supported 00:07:43.847 Memory Page Size Minimum: 4096 bytes 00:07:43.847 Memory Page Size Maximum: 65536 bytes 00:07:43.847 Persistent Memory Region: Not Supported 00:07:43.847 Optional Asynchronous Events Supported 00:07:43.847 Namespace Attribute Notices: Supported 00:07:43.847 Firmware Activation Notices: Not Supported 00:07:43.847 ANA Change Notices: Not Supported 00:07:43.847 PLE Aggregate Log Change Notices: Not Supported 00:07:43.847 LBA Status Info Alert Notices: Not Supported 00:07:43.847 EGE Aggregate Log Change Notices: Not Supported 00:07:43.847 Normal NVM Subsystem Shutdown event: Not Supported 00:07:43.847 Zone Descriptor Change Notices: Not Supported 00:07:43.847 Discovery Log Change Notices: Not Supported 00:07:43.847 Controller Attributes 00:07:43.847 128-bit Host Identifier: Not Supported 00:07:43.847 Non-Operational Permissive Mode: Not Supported 00:07:43.847 NVM Sets: Not Supported 00:07:43.847 Read Recovery Levels: Not Supported 00:07:43.847 Endurance Groups: Supported 00:07:43.848 Predictable Latency Mode: Not Supported 00:07:43.848 Traffic Based Keep Alive: Not Supported 00:07:43.848 Namespace Granularity: Not Supported 00:07:43.848 SQ Associations: Not Supported 00:07:43.848 UUID List: Not Supported 00:07:43.848 Multi-Domain Subsystem: Not Supported 00:07:43.848 Fixed Capacity Management: Not Supported 00:07:43.848 Variable Capacity Management: Not Supported 00:07:43.848 Delete Endurance Group: Not Supported 00:07:43.848 Delete NVM Set: Not Supported 00:07:43.848 Extended LBA Formats Supported: Supported 00:07:43.848 Flexible Data Placement Supported: Supported 00:07:43.848 00:07:43.848 Controller Memory Buffer Support 00:07:43.848 ================================ 00:07:43.848 Supported: No 00:07:43.848 00:07:43.848 Persistent Memory Region Support 00:07:43.848 ================================ 00:07:43.848 Supported: No 00:07:43.848 00:07:43.848 Admin Command Set Attributes 00:07:43.848 ============================ 00:07:43.848 Security Send/Receive: Not Supported 00:07:43.848 Format NVM: Supported 00:07:43.848 Firmware Activate/Download: Not Supported 00:07:43.848 Namespace Management: Supported 00:07:43.848 Device Self-Test: Not Supported 00:07:43.848 Directives: Supported 00:07:43.848 NVMe-MI: Not Supported 00:07:43.848 Virtualization Management: Not Supported 00:07:43.848 Doorbell Buffer Config: Supported 00:07:43.848 Get LBA Status Capability: Not Supported 00:07:43.848 Command & Feature Lockdown Capability: Not Supported 00:07:43.848 Abort Command Limit: 4 00:07:43.848 Async Event Request Limit: 4 00:07:43.848 Number of Firmware Slots: N/A 00:07:43.848 Firmware Slot 1 Read-Only: N/A 00:07:43.848 Firmware Activation Without Reset: N/A 00:07:43.848 Multiple Update Detection Support: N/A 00:07:43.848 Firmware Update Granularity: No Information Provided 00:07:43.848 Per-Namespace SMART Log: Yes 00:07:43.848 Asymmetric Namespace Access Log Page: Not Supported 
00:07:43.848 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:43.848 Command Effects Log Page: Supported 00:07:43.848 Get Log Page Extended Data: Supported 00:07:43.848 Telemetry Log Pages: Not Supported 00:07:43.848 Persistent Event Log Pages: Not Supported 00:07:43.848 Supported Log Pages Log Page: May Support 00:07:43.848 Commands Supported & Effects Log Page: Not Supported 00:07:43.848 Feature Identifiers & Effects Log Page: May Support 00:07:43.848 NVMe-MI Commands & Effects Log Page: May Support 00:07:43.848 Data Area 4 for Telemetry Log: Not Supported 00:07:43.848 Error Log Page Entries Supported: 1 00:07:43.848 Keep Alive: Not Supported 00:07:43.848 00:07:43.848 NVM Command Set Attributes 00:07:43.848 ========================== 00:07:43.848 Submission Queue Entry Size 00:07:43.848 Max: 64 00:07:43.848 Min: 64 00:07:43.848 Completion Queue Entry Size 00:07:43.848 Max: 16 00:07:43.848 Min: 16 00:07:43.848 Number of Namespaces: 256 00:07:43.848 Compare Command: Supported 00:07:43.848 Write Uncorrectable Command: Not Supported 00:07:43.848 Dataset Management Command: Supported 00:07:43.848 Write Zeroes Command: Supported 00:07:43.848 Set Features Save Field: Supported 00:07:43.848 Reservations: Not Supported 00:07:43.848 Timestamp: Supported 00:07:43.848 Copy: Supported 00:07:43.848 Volatile Write Cache: Present 00:07:43.848 Atomic Write Unit (Normal): 1 00:07:43.848 Atomic Write Unit (PFail): 1 00:07:43.848 Atomic Compare & Write Unit: 1 00:07:43.848 Fused Compare & Write: Not Supported 00:07:43.848 Scatter-Gather List 00:07:43.848 SGL Command Set: Supported 00:07:43.848 SGL Keyed: Not Supported 00:07:43.848 SGL Bit Bucket Descriptor: Not Supported 00:07:43.848 SGL Metadata Pointer: Not Supported 00:07:43.848 Oversized SGL: Not Supported 00:07:43.848 SGL Metadata Address: Not Supported 00:07:43.848 SGL Offset: Not Supported 00:07:43.848 Transport SGL Data Block: Not Supported 00:07:43.848 Replay Protected Memory Block: Not Supported 00:07:43.848 00:07:43.848 Firmware Slot Information 00:07:43.848 ========================= 00:07:43.848 Active slot: 1 00:07:43.848 Slot 1 Firmware Revision: 1.0 00:07:43.848 00:07:43.848 00:07:43.848 Commands Supported and Effects 00:07:43.848 ============================== 00:07:43.848 Admin Commands 00:07:43.848 -------------- 00:07:43.848 Delete I/O Submission Queue (00h): Supported 00:07:43.848 Create I/O Submission Queue (01h): Supported 00:07:43.848 Get Log Page (02h): Supported 00:07:43.848 Delete I/O Completion Queue (04h): Supported 00:07:43.848 Create I/O Completion Queue (05h): Supported 00:07:43.848 Identify (06h): Supported 00:07:43.848 Abort (08h): Supported 00:07:43.848 Set Features (09h): Supported 00:07:43.848 Get Features (0Ah): Supported 00:07:43.848 Asynchronous Event Request (0Ch): Supported 00:07:43.848 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:43.848 Directive Send (19h): Supported 00:07:43.848 Directive Receive (1Ah): Supported 00:07:43.848 Virtualization Management (1Ch): Supported 00:07:43.848 Doorbell Buffer Config (7Ch): Supported 00:07:43.848 Format NVM (80h): Supported LBA-Change 00:07:43.848 I/O Commands 00:07:43.848 ------------ 00:07:43.848 Flush (00h): Supported LBA-Change 00:07:43.848 Write (01h): Supported LBA-Change 00:07:43.848 Read (02h): Supported 00:07:43.848 Compare (05h): Supported 00:07:43.848 Write Zeroes (08h): Supported LBA-Change 00:07:43.848 Dataset Management (09h): Supported LBA-Change 00:07:43.848 Unknown (0Ch): Supported 00:07:43.848 Unknown (12h): Supported 00:07:43.848 Copy 
(19h): Supported LBA-Change 00:07:43.848 Unknown (1Dh): Supported LBA-Change 00:07:43.848 00:07:43.848 Error Log 00:07:43.848 ========= 00:07:43.848 00:07:43.848 Arbitration 00:07:43.848 =========== 00:07:43.848 Arbitration Burst: no limit 00:07:43.848 00:07:43.848 Power Management 00:07:43.848 ================ 00:07:43.848 Number of Power States: 1 00:07:43.848 Current Power State: Power State #0 00:07:43.848 Power State #0: 00:07:43.848 Max Power: 25.00 W 00:07:43.848 Non-Operational State: Operational 00:07:43.848 Entry Latency: 16 microseconds 00:07:43.848 Exit Latency: 4 microseconds 00:07:43.848 Relative Read Throughput: 0 00:07:43.848 Relative Read Latency: 0 00:07:43.848 Relative Write Throughput: 0 00:07:43.848 Relative Write Latency: 0 00:07:43.848 Idle Power: Not Reported 00:07:43.848 Active Power: Not Reported 00:07:43.848 Non-Operational Permissive Mode: Not Supported 00:07:43.848 00:07:43.848 Health Information 00:07:43.848 ================== 00:07:43.848 Critical Warnings: 00:07:43.848 Available Spare Space: OK 00:07:43.848 Temperature: OK 00:07:43.848 Device Reliability: OK 00:07:43.848 Read Only: No 00:07:43.848 Volatile Memory Backup: OK 00:07:43.848 Current Temperature: 323 Kelvin (50 Celsius) 00:07:43.848 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:43.848 Available Spare: 0% 00:07:43.848 Available Spare Threshold: 0% 00:07:43.848 Life Percentage Used: 0% 00:07:43.848 Data Units Read: 950 00:07:43.848 Data Units Written: 879 00:07:43.848 Host Read Commands: 41036 00:07:43.848 Host Write Commands: 40459 00:07:43.848 Controller Busy Time: 0 minutes 00:07:43.848 Power Cycles: 0 00:07:43.848 Power On Hours: 0 hours 00:07:43.848 Unsafe Shutdowns: 0 00:07:43.848 Unrecoverable Media Errors: 0 00:07:43.848 Lifetime Error Log Entries: 0 00:07:43.848 Warning Temperature Time: 0 minutes 00:07:43.848 Critical Temperature Time: 0 minutes 00:07:43.848 00:07:43.848 Number of Queues 00:07:43.848 ================ 00:07:43.848 Number of I/O Submission Queues: 64 00:07:43.848 Number of I/O Completion Queues: 64 00:07:43.848 00:07:43.848 ZNS Specific Controller Data 00:07:43.848 ============================ 00:07:43.848 Zone Append Size Limit: 0 00:07:43.848 00:07:43.848 00:07:43.848 Active Namespaces 00:07:43.848 ================= 00:07:43.848 Namespace ID:1 00:07:43.848 Error Recovery Timeout: Unlimited 00:07:43.848 Command Set Identifier: NVM (00h) 00:07:43.848 Deallocate: Supported 00:07:43.848 Deallocated/Unwritten Error: Supported 00:07:43.848 Deallocated Read Value: All 0x00 00:07:43.848 Deallocate in Write Zeroes: Not Supported 00:07:43.848 Deallocated Guard Field: 0xFFFF 00:07:43.848 Flush: Supported 00:07:43.848 Reservation: Not Supported 00:07:43.848 Namespace Sharing Capabilities: Multiple Controllers 00:07:43.848 Size (in LBAs): 262144 (1GiB) 00:07:43.848 Capacity (in LBAs): 262144 (1GiB) 00:07:43.848 Utilization (in LBAs): 262144 (1GiB) 00:07:43.848 Thin Provisioning: Not Supported 00:07:43.848 Per-NS Atomic Units: No 00:07:43.848 Maximum Single Source Range Length: 128 00:07:43.848 Maximum Copy Length: 128 00:07:43.848 Maximum Source Range Count: 128 00:07:43.848 NGUID/EUI64 Never Reused: No 00:07:43.848 Namespace Write Protected: No 00:07:43.848 Endurance group ID: 1 00:07:43.848 Number of LBA Formats: 8 00:07:43.848 Current LBA Format: LBA Format #04 00:07:43.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:43.848 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:43.849 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:43.849 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:43.849 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:43.849 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:43.849 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:43.849 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:43.849 00:07:43.849 Get Feature FDP: 00:07:43.849 ================ 00:07:43.849 Enabled: Yes 00:07:43.849 FDP configuration index: 0 00:07:43.849 00:07:43.849 FDP configurations log page 00:07:43.849 =========================== 00:07:43.849 Number of FDP configurations: 1 00:07:43.849 Version: 0 00:07:43.849 Size: 112 00:07:43.849 FDP Configuration Descriptor: 0 00:07:43.849 Descriptor Size: 96 00:07:43.849 Reclaim Group Identifier format: 2 00:07:43.849 FDP Volatile Write Cache: Not Present 00:07:43.849 FDP Configuration: Valid 00:07:43.849 Vendor Specific Size: 0 00:07:43.849 Number of Reclaim Groups: 2 00:07:43.849 Number of Reclaim Unit Handles: 8 00:07:43.849 Max Placement Identifiers: 128 00:07:43.849 Number of Namespaces Supported: 256 00:07:43.849 Reclaim Unit Nominal Size: 6000000 bytes 00:07:43.849 Estimated Reclaim Unit Time Limit: Not Reported 00:07:43.849 RUH Desc #000: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #001: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #002: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #003: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #004: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #005: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #006: RUH Type: Initially Isolated 00:07:43.849 RUH Desc #007: RUH Type: Initially Isolated 00:07:43.849 00:07:43.849 FDP reclaim unit handle usage log page 00:07:43.849 ====================================== 00:07:43.849 Number of Reclaim Unit Handles: 8 00:07:43.849 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:43.849 RUH Usage Desc #001: RUH Attributes: Unused 00:07:43.849 RUH Usage Desc #002: RUH Attributes: Unused 00:07:43.849 RUH Usage Desc #003: RUH Attributes: Unused 00:07:43.849 RUH Usage Desc #004: RUH Attributes: Unused 00:07:43.849 RUH Usage Desc #005: RUH Attributes: Unused 00:07:43.849 RUH Usage Desc #006: RUH Attributes: Unused 00:07:43.849 RUH Usage Desc #007: RUH Attributes: Unused 00:07:43.849 00:07:43.849 FDP statistics log page 00:07:43.849 ======================= 00:07:43.849 Host bytes with metadata written: 527212544 00:07:43.849 Media bytes with metadata written: 527269888 00:07:43.849 Media bytes erased: 0 00:07:43.849 00:07:43.849 FDP events log page 00:07:43.849 =================== 00:07:43.849 Number of FDP events: 0 00:07:43.849 00:07:43.849 NVM Specific Namespace Data 00:07:43.849 =========================== 00:07:43.849 Logical Block Storage Tag Mask: 0 00:07:43.849 Protection Information Capabilities: 00:07:43.849 16b Guard Protection Information Storage Tag Support: No 00:07:43.849 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:43.849 Storage Tag Check Read Support: No 00:07:43.849 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:43.849 00:07:43.849 real 0m1.353s 00:07:43.849 user 0m0.486s 00:07:43.849 sys 0m0.646s 00:07:43.849 12:33:43 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.849 ************************************ 00:07:43.849 END TEST nvme_identify 00:07:43.849 ************************************ 00:07:43.849 12:33:43 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:43.849 12:33:43 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:43.849 12:33:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.849 12:33:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.849 12:33:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.849 ************************************ 00:07:43.849 START TEST nvme_perf 00:07:43.849 ************************************ 00:07:43.849 12:33:43 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:43.849 12:33:43 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:45.237 Initializing NVMe Controllers 00:07:45.237 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:45.237 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:45.237 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:45.237 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:45.237 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:45.237 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:45.237 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:45.237 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:45.237 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:45.237 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:45.237 Initialization complete. Launching workers. 
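For anyone replaying the read pass launched above outside CI, a minimal sketch follows. The flag meanings reflect the spdk_nvme_perf usage text as commonly understood and should be confirmed with `spdk_nvme_perf --help` on your build; the -LL and -N semantics in particular are assumptions here, not something this log states.

#!/usr/bin/env bash
# Sketch: replay the read-latency pass from this log on a local SPDK build.
# Assumed flag meanings (verify against `spdk_nvme_perf --help`):
#   -q 128    queue depth: 128 outstanding I/Os
#   -w read   100% read workload
#   -o 12288  I/O size in bytes (12 KiB), matching the run above
#   -t 1      run time in seconds
#   -LL       latency tracking; the doubled flag is assumed to request the
#             detailed per-bucket histograms printed below (single -L: summary)
#   -i 0      shared-memory instance ID, as used by the harness above
#   -N        assumed to suppress the shutdown notification on exit
sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N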
00:07:45.237 ======================================================== 00:07:45.237 Latency(us) 00:07:45.237 Device Information : IOPS MiB/s Average min max 00:07:45.237 PCIE (0000:00:13.0) NSID 1 from core 0: 7733.64 90.63 16579.47 13189.77 45838.40 00:07:45.237 PCIE (0000:00:10.0) NSID 1 from core 0: 7733.64 90.63 16548.55 13111.57 44347.28 00:07:45.237 PCIE (0000:00:11.0) NSID 1 from core 0: 7733.64 90.63 16515.62 12931.19 42682.17 00:07:45.237 PCIE (0000:00:12.0) NSID 1 from core 0: 7733.64 90.63 16477.28 11096.03 41455.28 00:07:45.237 PCIE (0000:00:12.0) NSID 2 from core 0: 7733.64 90.63 16435.51 10850.18 39481.73 00:07:45.237 PCIE (0000:00:12.0) NSID 3 from core 0: 7797.55 91.38 16266.86 9994.60 29771.54 00:07:45.237 ======================================================== 00:07:45.237 Total : 46465.74 544.52 16470.27 9994.60 45838.40 00:07:45.237 00:07:45.237 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:45.237 ================================================================================= 00:07:45.237 1.00000% : 14014.622us 00:07:45.237 10.00000% : 14922.043us 00:07:45.237 25.00000% : 15426.166us 00:07:45.237 50.00000% : 16131.938us 00:07:45.237 75.00000% : 16938.535us 00:07:45.237 90.00000% : 17845.957us 00:07:45.237 95.00000% : 18652.554us 00:07:45.237 98.00000% : 20467.397us 00:07:45.237 99.00000% : 36498.511us 00:07:45.237 99.50000% : 44766.129us 00:07:45.237 99.90000% : 45774.375us 00:07:45.237 99.99000% : 45976.025us 00:07:45.237 99.99900% : 45976.025us 00:07:45.237 99.99990% : 45976.025us 00:07:45.237 99.99999% : 45976.025us 00:07:45.237 00:07:45.237 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:45.238 ================================================================================= 00:07:45.238 1.00000% : 13812.972us 00:07:45.238 10.00000% : 14821.218us 00:07:45.238 25.00000% : 15426.166us 00:07:45.238 50.00000% : 16131.938us 00:07:45.238 75.00000% : 16938.535us 00:07:45.238 90.00000% : 17946.782us 00:07:45.238 95.00000% : 18652.554us 00:07:45.238 98.00000% : 20366.572us 00:07:45.238 99.00000% : 35490.265us 00:07:45.238 99.50000% : 43354.585us 00:07:45.238 99.90000% : 44161.182us 00:07:45.238 99.99000% : 44362.831us 00:07:45.238 99.99900% : 44362.831us 00:07:45.238 99.99990% : 44362.831us 00:07:45.238 99.99999% : 44362.831us 00:07:45.238 00:07:45.238 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:45.238 ================================================================================= 00:07:45.238 1.00000% : 13712.148us 00:07:45.238 10.00000% : 14922.043us 00:07:45.238 25.00000% : 15426.166us 00:07:45.238 50.00000% : 16131.938us 00:07:45.238 75.00000% : 16938.535us 00:07:45.238 90.00000% : 17946.782us 00:07:45.238 95.00000% : 18652.554us 00:07:45.238 98.00000% : 20568.222us 00:07:45.238 99.00000% : 33272.123us 00:07:45.238 99.50000% : 41539.742us 00:07:45.238 99.90000% : 42547.988us 00:07:45.238 99.99000% : 42749.637us 00:07:45.238 99.99900% : 42749.637us 00:07:45.238 99.99990% : 42749.637us 00:07:45.238 99.99999% : 42749.637us 00:07:45.238 00:07:45.238 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:45.238 ================================================================================= 00:07:45.238 1.00000% : 13712.148us 00:07:45.238 10.00000% : 14821.218us 00:07:45.238 25.00000% : 15426.166us 00:07:45.238 50.00000% : 16131.938us 00:07:45.238 75.00000% : 16938.535us 00:07:45.238 90.00000% : 17845.957us 00:07:45.238 95.00000% : 18753.378us 00:07:45.238 98.00000% : 20568.222us 
00:07:45.238 99.00000% : 32465.526us 00:07:45.238 99.50000% : 40329.846us 00:07:45.238 99.90000% : 41338.092us 00:07:45.238 99.99000% : 41539.742us 00:07:45.238 99.99900% : 41539.742us 00:07:45.238 99.99990% : 41539.742us 00:07:45.238 99.99999% : 41539.742us 00:07:45.238 00:07:45.238 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:45.238 ================================================================================= 00:07:45.238 1.00000% : 13611.323us 00:07:45.238 10.00000% : 14821.218us 00:07:45.238 25.00000% : 15426.166us 00:07:45.238 50.00000% : 16131.938us 00:07:45.238 75.00000% : 16938.535us 00:07:45.238 90.00000% : 17946.782us 00:07:45.238 95.00000% : 18753.378us 00:07:45.238 98.00000% : 20467.397us 00:07:45.238 99.00000% : 29642.437us 00:07:45.238 99.50000% : 38111.705us 00:07:45.238 99.90000% : 39321.600us 00:07:45.238 99.99000% : 39523.249us 00:07:45.238 99.99900% : 39523.249us 00:07:45.238 99.99990% : 39523.249us 00:07:45.238 99.99999% : 39523.249us 00:07:45.238 00:07:45.238 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:45.238 ================================================================================= 00:07:45.238 1.00000% : 13308.849us 00:07:45.238 10.00000% : 14821.218us 00:07:45.238 25.00000% : 15426.166us 00:07:45.238 50.00000% : 16131.938us 00:07:45.238 75.00000% : 16938.535us 00:07:45.238 90.00000% : 17845.957us 00:07:45.238 95.00000% : 18753.378us 00:07:45.238 98.00000% : 19862.449us 00:07:45.238 99.00000% : 20870.695us 00:07:45.238 99.50000% : 28835.840us 00:07:45.238 99.90000% : 29642.437us 00:07:45.238 99.99000% : 29844.086us 00:07:45.238 99.99900% : 29844.086us 00:07:45.238 99.99990% : 29844.086us 00:07:45.238 99.99999% : 29844.086us 00:07:45.238 00:07:45.238 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:45.238 ============================================================================== 00:07:45.238 Range in us Cumulative IO count 00:07:45.238 13107.200 - 13208.025: 0.0129% ( 1) 00:07:45.238 13208.025 - 13308.849: 0.1550% ( 11) 00:07:45.238 13308.849 - 13409.674: 0.2066% ( 4) 00:07:45.238 13409.674 - 13510.498: 0.2583% ( 4) 00:07:45.238 13510.498 - 13611.323: 0.3357% ( 6) 00:07:45.238 13611.323 - 13712.148: 0.4649% ( 10) 00:07:45.238 13712.148 - 13812.972: 0.6973% ( 18) 00:07:45.238 13812.972 - 13913.797: 0.9814% ( 22) 00:07:45.238 13913.797 - 14014.622: 1.3559% ( 29) 00:07:45.238 14014.622 - 14115.446: 1.8724% ( 40) 00:07:45.238 14115.446 - 14216.271: 2.4664% ( 46) 00:07:45.238 14216.271 - 14317.095: 3.1508% ( 53) 00:07:45.238 14317.095 - 14417.920: 4.1064% ( 74) 00:07:45.238 14417.920 - 14518.745: 5.0103% ( 70) 00:07:45.238 14518.745 - 14619.569: 5.8755% ( 67) 00:07:45.238 14619.569 - 14720.394: 7.1927% ( 102) 00:07:45.238 14720.394 - 14821.218: 8.8197% ( 126) 00:07:45.238 14821.218 - 14922.043: 10.8342% ( 156) 00:07:45.238 14922.043 - 15022.868: 13.2619% ( 188) 00:07:45.238 15022.868 - 15123.692: 15.7670% ( 194) 00:07:45.238 15123.692 - 15224.517: 18.9437% ( 246) 00:07:45.238 15224.517 - 15325.342: 22.2624% ( 257) 00:07:45.238 15325.342 - 15426.166: 25.9298% ( 284) 00:07:45.238 15426.166 - 15526.991: 29.8683% ( 305) 00:07:45.238 15526.991 - 15627.815: 33.7035% ( 297) 00:07:45.238 15627.815 - 15728.640: 37.4225% ( 288) 00:07:45.238 15728.640 - 15829.465: 41.1157% ( 286) 00:07:45.238 15829.465 - 15930.289: 44.8864% ( 292) 00:07:45.238 15930.289 - 16031.114: 48.8765% ( 309) 00:07:45.238 16031.114 - 16131.938: 52.7763% ( 302) 00:07:45.238 16131.938 - 16232.763: 56.3017% ( 273) 00:07:45.238 
16232.763 - 16333.588: 59.6462% ( 259) 00:07:45.238 16333.588 - 16434.412: 62.6808% ( 235) 00:07:45.238 16434.412 - 16535.237: 65.7154% ( 235) 00:07:45.238 16535.237 - 16636.062: 68.3626% ( 205) 00:07:45.238 16636.062 - 16736.886: 70.9323% ( 199) 00:07:45.238 16736.886 - 16837.711: 73.3084% ( 184) 00:07:45.238 16837.711 - 16938.535: 75.8652% ( 198) 00:07:45.238 16938.535 - 17039.360: 78.1767% ( 179) 00:07:45.238 17039.360 - 17140.185: 80.3461% ( 168) 00:07:45.238 17140.185 - 17241.009: 82.1410% ( 139) 00:07:45.238 17241.009 - 17341.834: 83.6777% ( 119) 00:07:45.238 17341.834 - 17442.658: 85.1498% ( 114) 00:07:45.238 17442.658 - 17543.483: 86.5702% ( 110) 00:07:45.238 17543.483 - 17644.308: 88.0940% ( 118) 00:07:45.238 17644.308 - 17745.132: 89.3853% ( 100) 00:07:45.238 17745.132 - 17845.957: 90.5733% ( 92) 00:07:45.238 17845.957 - 17946.782: 91.4644% ( 69) 00:07:45.238 17946.782 - 18047.606: 92.1617% ( 54) 00:07:45.238 18047.606 - 18148.431: 92.7686% ( 47) 00:07:45.238 18148.431 - 18249.255: 93.3368% ( 44) 00:07:45.238 18249.255 - 18350.080: 93.8791% ( 42) 00:07:45.238 18350.080 - 18450.905: 94.4086% ( 41) 00:07:45.238 18450.905 - 18551.729: 94.8605% ( 35) 00:07:45.238 18551.729 - 18652.554: 95.2608% ( 31) 00:07:45.238 18652.554 - 18753.378: 95.5837% ( 25) 00:07:45.238 18753.378 - 18854.203: 95.8549% ( 21) 00:07:45.238 18854.203 - 18955.028: 96.0873% ( 18) 00:07:45.238 18955.028 - 19055.852: 96.2552% ( 13) 00:07:45.238 19055.852 - 19156.677: 96.4489% ( 15) 00:07:45.238 19156.677 - 19257.502: 96.6426% ( 15) 00:07:45.238 19257.502 - 19358.326: 96.8363% ( 15) 00:07:45.238 19358.326 - 19459.151: 97.0687% ( 18) 00:07:45.238 19459.151 - 19559.975: 97.3011% ( 18) 00:07:45.238 19559.975 - 19660.800: 97.4561% ( 12) 00:07:45.238 19660.800 - 19761.625: 97.5852% ( 10) 00:07:45.238 19761.625 - 19862.449: 97.7014% ( 9) 00:07:45.238 19862.449 - 19963.274: 97.8048% ( 8) 00:07:45.238 19963.274 - 20064.098: 97.8693% ( 5) 00:07:45.238 20064.098 - 20164.923: 97.9210% ( 4) 00:07:45.238 20164.923 - 20265.748: 97.9468% ( 2) 00:07:45.238 20265.748 - 20366.572: 97.9985% ( 4) 00:07:45.238 20366.572 - 20467.397: 98.0630% ( 5) 00:07:45.238 20467.397 - 20568.222: 98.1147% ( 4) 00:07:45.238 20568.222 - 20669.046: 98.1663% ( 4) 00:07:45.238 20669.046 - 20769.871: 98.2309% ( 5) 00:07:45.238 20769.871 - 20870.695: 98.2825% ( 4) 00:07:45.238 20870.695 - 20971.520: 98.3471% ( 5) 00:07:45.238 34885.317 - 35086.966: 98.4375% ( 7) 00:07:45.238 35086.966 - 35288.615: 98.5150% ( 6) 00:07:45.238 35288.615 - 35490.265: 98.6054% ( 7) 00:07:45.238 35490.265 - 35691.914: 98.7087% ( 8) 00:07:45.238 35691.914 - 35893.563: 98.7991% ( 7) 00:07:45.238 35893.563 - 36095.212: 98.8895% ( 7) 00:07:45.238 36095.212 - 36296.862: 98.9669% ( 6) 00:07:45.238 36296.862 - 36498.511: 99.0444% ( 6) 00:07:45.238 36498.511 - 36700.160: 99.1348% ( 7) 00:07:45.238 36700.160 - 36901.809: 99.1736% ( 3) 00:07:45.238 43757.883 - 43959.532: 99.1865% ( 1) 00:07:45.238 43959.532 - 44161.182: 99.2639% ( 6) 00:07:45.238 44161.182 - 44362.831: 99.3543% ( 7) 00:07:45.238 44362.831 - 44564.480: 99.4576% ( 8) 00:07:45.238 44564.480 - 44766.129: 99.5480% ( 7) 00:07:45.238 44766.129 - 44967.778: 99.6384% ( 7) 00:07:45.238 44967.778 - 45169.428: 99.7159% ( 6) 00:07:45.238 45169.428 - 45371.077: 99.7934% ( 6) 00:07:45.238 45371.077 - 45572.726: 99.8838% ( 7) 00:07:45.238 45572.726 - 45774.375: 99.9742% ( 7) 00:07:45.238 45774.375 - 45976.025: 100.0000% ( 2) 00:07:45.238 00:07:45.238 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:45.238 
============================================================================== 00:07:45.238 Range in us Cumulative IO count 00:07:45.238 13107.200 - 13208.025: 0.0775% ( 6) 00:07:45.238 13208.025 - 13308.849: 0.1291% ( 4) 00:07:45.238 13308.849 - 13409.674: 0.2066% ( 6) 00:07:45.238 13409.674 - 13510.498: 0.3228% ( 9) 00:07:45.238 13510.498 - 13611.323: 0.4649% ( 11) 00:07:45.238 13611.323 - 13712.148: 0.7619% ( 23) 00:07:45.238 13712.148 - 13812.972: 1.1364% ( 29) 00:07:45.238 13812.972 - 13913.797: 1.6012% ( 36) 00:07:45.238 13913.797 - 14014.622: 2.0661% ( 36) 00:07:45.238 14014.622 - 14115.446: 2.7634% ( 54) 00:07:45.238 14115.446 - 14216.271: 3.5511% ( 61) 00:07:45.238 14216.271 - 14317.095: 4.3647% ( 63) 00:07:45.238 14317.095 - 14417.920: 5.1782% ( 63) 00:07:45.238 14417.920 - 14518.745: 6.4308% ( 97) 00:07:45.238 14518.745 - 14619.569: 7.7738% ( 104) 00:07:45.238 14619.569 - 14720.394: 9.3363% ( 121) 00:07:45.238 14720.394 - 14821.218: 11.6090% ( 176) 00:07:45.238 14821.218 - 14922.043: 13.4427% ( 142) 00:07:45.238 14922.043 - 15022.868: 15.4830% ( 158) 00:07:45.238 15022.868 - 15123.692: 18.0139% ( 196) 00:07:45.239 15123.692 - 15224.517: 20.8936% ( 223) 00:07:45.239 15224.517 - 15325.342: 23.8249% ( 227) 00:07:45.239 15325.342 - 15426.166: 27.2340% ( 264) 00:07:45.239 15426.166 - 15526.991: 30.6947% ( 268) 00:07:45.239 15526.991 - 15627.815: 34.4396% ( 290) 00:07:45.239 15627.815 - 15728.640: 37.8357% ( 263) 00:07:45.239 15728.640 - 15829.465: 41.7226% ( 301) 00:07:45.239 15829.465 - 15930.289: 45.2608% ( 274) 00:07:45.239 15930.289 - 16031.114: 48.8507% ( 278) 00:07:45.239 16031.114 - 16131.938: 52.4277% ( 277) 00:07:45.239 16131.938 - 16232.763: 56.0434% ( 280) 00:07:45.239 16232.763 - 16333.588: 59.2459% ( 248) 00:07:45.239 16333.588 - 16434.412: 62.6291% ( 262) 00:07:45.239 16434.412 - 16535.237: 65.7929% ( 245) 00:07:45.239 16535.237 - 16636.062: 68.3755% ( 200) 00:07:45.239 16636.062 - 16736.886: 70.9323% ( 198) 00:07:45.239 16736.886 - 16837.711: 73.0759% ( 166) 00:07:45.239 16837.711 - 16938.535: 75.3228% ( 174) 00:07:45.239 16938.535 - 17039.360: 77.3373% ( 156) 00:07:45.239 17039.360 - 17140.185: 79.2097% ( 145) 00:07:45.239 17140.185 - 17241.009: 81.1338% ( 149) 00:07:45.239 17241.009 - 17341.834: 82.7092% ( 122) 00:07:45.239 17341.834 - 17442.658: 84.1813% ( 114) 00:07:45.239 17442.658 - 17543.483: 85.4855% ( 101) 00:07:45.239 17543.483 - 17644.308: 86.7769% ( 100) 00:07:45.239 17644.308 - 17745.132: 87.9649% ( 92) 00:07:45.239 17745.132 - 17845.957: 89.1787% ( 94) 00:07:45.239 17845.957 - 17946.782: 90.1214% ( 73) 00:07:45.239 17946.782 - 18047.606: 91.2061% ( 84) 00:07:45.239 18047.606 - 18148.431: 91.8905% ( 53) 00:07:45.239 18148.431 - 18249.255: 92.7557% ( 67) 00:07:45.239 18249.255 - 18350.080: 93.4401% ( 53) 00:07:45.239 18350.080 - 18450.905: 94.1116% ( 52) 00:07:45.239 18450.905 - 18551.729: 94.6539% ( 42) 00:07:45.239 18551.729 - 18652.554: 95.0801% ( 33) 00:07:45.239 18652.554 - 18753.378: 95.4158% ( 26) 00:07:45.239 18753.378 - 18854.203: 95.8290% ( 32) 00:07:45.239 18854.203 - 18955.028: 96.0486% ( 17) 00:07:45.239 18955.028 - 19055.852: 96.1648% ( 9) 00:07:45.239 19055.852 - 19156.677: 96.3456% ( 14) 00:07:45.239 19156.677 - 19257.502: 96.5005% ( 12) 00:07:45.239 19257.502 - 19358.326: 96.6167% ( 9) 00:07:45.239 19358.326 - 19459.151: 96.7459% ( 10) 00:07:45.239 19459.151 - 19559.975: 96.9008% ( 12) 00:07:45.239 19559.975 - 19660.800: 97.0816% ( 14) 00:07:45.239 19660.800 - 19761.625: 97.2237% ( 11) 00:07:45.239 19761.625 - 19862.449: 97.3657% ( 11) 
00:07:45.239 19862.449 - 19963.274: 97.5207% ( 12) 00:07:45.239 19963.274 - 20064.098: 97.7014% ( 14) 00:07:45.239 20064.098 - 20164.923: 97.8048% ( 8) 00:07:45.239 20164.923 - 20265.748: 97.9597% ( 12) 00:07:45.239 20265.748 - 20366.572: 98.0114% ( 4) 00:07:45.239 20366.572 - 20467.397: 98.0630% ( 4) 00:07:45.239 20467.397 - 20568.222: 98.1147% ( 4) 00:07:45.239 20568.222 - 20669.046: 98.1534% ( 3) 00:07:45.239 20669.046 - 20769.871: 98.2051% ( 4) 00:07:45.239 20769.871 - 20870.695: 98.2438% ( 3) 00:07:45.239 20870.695 - 20971.520: 98.2955% ( 4) 00:07:45.239 20971.520 - 21072.345: 98.3342% ( 3) 00:07:45.239 21072.345 - 21173.169: 98.3471% ( 1) 00:07:45.239 33473.772 - 33675.422: 98.3600% ( 1) 00:07:45.239 33675.422 - 33877.071: 98.4375% ( 6) 00:07:45.239 33877.071 - 34078.720: 98.4762% ( 3) 00:07:45.239 34078.720 - 34280.369: 98.5537% ( 6) 00:07:45.239 34280.369 - 34482.018: 98.6829% ( 10) 00:07:45.239 34482.018 - 34683.668: 98.7216% ( 3) 00:07:45.239 34683.668 - 34885.317: 98.7862% ( 5) 00:07:45.239 34885.317 - 35086.966: 98.8378% ( 4) 00:07:45.239 35086.966 - 35288.615: 98.9669% ( 10) 00:07:45.239 35288.615 - 35490.265: 99.0573% ( 7) 00:07:45.239 35490.265 - 35691.914: 99.1219% ( 5) 00:07:45.239 35691.914 - 35893.563: 99.1736% ( 4) 00:07:45.239 42144.689 - 42346.338: 99.1865% ( 1) 00:07:45.239 42346.338 - 42547.988: 99.2510% ( 5) 00:07:45.239 42547.988 - 42749.637: 99.3156% ( 5) 00:07:45.239 42749.637 - 42951.286: 99.4060% ( 7) 00:07:45.239 42951.286 - 43152.935: 99.4964% ( 7) 00:07:45.239 43152.935 - 43354.585: 99.5739% ( 6) 00:07:45.239 43354.585 - 43556.234: 99.6513% ( 6) 00:07:45.239 43556.234 - 43757.883: 99.7417% ( 7) 00:07:45.239 43757.883 - 43959.532: 99.8192% ( 6) 00:07:45.239 43959.532 - 44161.182: 99.9096% ( 7) 00:07:45.239 44161.182 - 44362.831: 100.0000% ( 7) 00:07:45.239 00:07:45.239 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:45.239 ============================================================================== 00:07:45.239 Range in us Cumulative IO count 00:07:45.239 12905.551 - 13006.375: 0.0775% ( 6) 00:07:45.239 13006.375 - 13107.200: 0.1679% ( 7) 00:07:45.239 13107.200 - 13208.025: 0.2066% ( 3) 00:07:45.239 13208.025 - 13308.849: 0.3357% ( 10) 00:07:45.239 13308.849 - 13409.674: 0.4520% ( 9) 00:07:45.239 13409.674 - 13510.498: 0.5811% ( 10) 00:07:45.239 13510.498 - 13611.323: 0.8264% ( 19) 00:07:45.239 13611.323 - 13712.148: 1.1105% ( 22) 00:07:45.239 13712.148 - 13812.972: 1.3042% ( 15) 00:07:45.239 13812.972 - 13913.797: 1.5367% ( 18) 00:07:45.239 13913.797 - 14014.622: 1.8982% ( 28) 00:07:45.239 14014.622 - 14115.446: 2.4019% ( 39) 00:07:45.239 14115.446 - 14216.271: 2.9959% ( 46) 00:07:45.239 14216.271 - 14317.095: 3.6286% ( 49) 00:07:45.239 14317.095 - 14417.920: 4.4551% ( 64) 00:07:45.239 14417.920 - 14518.745: 5.3590% ( 70) 00:07:45.239 14518.745 - 14619.569: 6.3533% ( 77) 00:07:45.239 14619.569 - 14720.394: 7.8383% ( 115) 00:07:45.239 14720.394 - 14821.218: 9.8786% ( 158) 00:07:45.239 14821.218 - 14922.043: 11.8414% ( 152) 00:07:45.239 14922.043 - 15022.868: 13.9721% ( 165) 00:07:45.239 15022.868 - 15123.692: 16.5935% ( 203) 00:07:45.239 15123.692 - 15224.517: 19.3053% ( 210) 00:07:45.239 15224.517 - 15325.342: 22.5594% ( 252) 00:07:45.239 15325.342 - 15426.166: 26.0201% ( 268) 00:07:45.239 15426.166 - 15526.991: 29.5713% ( 275) 00:07:45.239 15526.991 - 15627.815: 33.2128% ( 282) 00:07:45.239 15627.815 - 15728.640: 37.0480% ( 297) 00:07:45.239 15728.640 - 15829.465: 41.2707% ( 327) 00:07:45.239 15829.465 - 15930.289: 44.9638% ( 286) 
00:07:45.239 15930.289 - 16031.114: 48.5408% ( 277) 00:07:45.239 16031.114 - 16131.938: 52.1436% ( 279) 00:07:45.239 16131.938 - 16232.763: 55.7593% ( 280) 00:07:45.239 16232.763 - 16333.588: 59.4008% ( 282) 00:07:45.239 16333.588 - 16434.412: 62.8487% ( 267) 00:07:45.239 16434.412 - 16535.237: 66.0124% ( 245) 00:07:45.239 16535.237 - 16636.062: 69.0857% ( 238) 00:07:45.239 16636.062 - 16736.886: 71.8104% ( 211) 00:07:45.239 16736.886 - 16837.711: 74.1477% ( 181) 00:07:45.239 16837.711 - 16938.535: 76.4979% ( 182) 00:07:45.239 16938.535 - 17039.360: 78.5899% ( 162) 00:07:45.239 17039.360 - 17140.185: 80.3719% ( 138) 00:07:45.239 17140.185 - 17241.009: 81.9602% ( 123) 00:07:45.239 17241.009 - 17341.834: 83.4323% ( 114) 00:07:45.239 17341.834 - 17442.658: 85.0207% ( 123) 00:07:45.239 17442.658 - 17543.483: 86.5315% ( 117) 00:07:45.239 17543.483 - 17644.308: 87.7066% ( 91) 00:07:45.239 17644.308 - 17745.132: 88.7526% ( 81) 00:07:45.239 17745.132 - 17845.957: 89.6823% ( 72) 00:07:45.239 17845.957 - 17946.782: 90.4571% ( 60) 00:07:45.239 17946.782 - 18047.606: 91.2190% ( 59) 00:07:45.239 18047.606 - 18148.431: 91.9680% ( 58) 00:07:45.239 18148.431 - 18249.255: 92.7686% ( 62) 00:07:45.239 18249.255 - 18350.080: 93.5692% ( 62) 00:07:45.239 18350.080 - 18450.905: 94.3440% ( 60) 00:07:45.239 18450.905 - 18551.729: 94.8864% ( 42) 00:07:45.239 18551.729 - 18652.554: 95.3642% ( 37) 00:07:45.239 18652.554 - 18753.378: 95.7515% ( 30) 00:07:45.239 18753.378 - 18854.203: 96.1777% ( 33) 00:07:45.239 18854.203 - 18955.028: 96.5134% ( 26) 00:07:45.239 18955.028 - 19055.852: 96.7975% ( 22) 00:07:45.239 19055.852 - 19156.677: 97.0558% ( 20) 00:07:45.239 19156.677 - 19257.502: 97.2495% ( 15) 00:07:45.239 19257.502 - 19358.326: 97.3399% ( 7) 00:07:45.239 19358.326 - 19459.151: 97.3915% ( 4) 00:07:45.239 19459.151 - 19559.975: 97.4561% ( 5) 00:07:45.239 19559.975 - 19660.800: 97.5077% ( 4) 00:07:45.239 19660.800 - 19761.625: 97.5207% ( 1) 00:07:45.239 19761.625 - 19862.449: 97.5594% ( 3) 00:07:45.239 19862.449 - 19963.274: 97.6756% ( 9) 00:07:45.239 19963.274 - 20064.098: 97.7144% ( 3) 00:07:45.239 20064.098 - 20164.923: 97.7789% ( 5) 00:07:45.239 20164.923 - 20265.748: 97.8435% ( 5) 00:07:45.239 20265.748 - 20366.572: 97.9081% ( 5) 00:07:45.239 20366.572 - 20467.397: 97.9597% ( 4) 00:07:45.239 20467.397 - 20568.222: 98.0114% ( 4) 00:07:45.239 20568.222 - 20669.046: 98.0759% ( 5) 00:07:45.239 20669.046 - 20769.871: 98.1405% ( 5) 00:07:45.239 20769.871 - 20870.695: 98.2180% ( 6) 00:07:45.239 20870.695 - 20971.520: 98.2825% ( 5) 00:07:45.239 20971.520 - 21072.345: 98.3213% ( 3) 00:07:45.239 21072.345 - 21173.169: 98.3471% ( 2) 00:07:45.239 31457.280 - 31658.929: 98.4117% ( 5) 00:07:45.239 31658.929 - 31860.578: 98.4892% ( 6) 00:07:45.239 31860.578 - 32062.228: 98.5666% ( 6) 00:07:45.239 32062.228 - 32263.877: 98.6441% ( 6) 00:07:45.239 32263.877 - 32465.526: 98.7216% ( 6) 00:07:45.239 32465.526 - 32667.175: 98.8120% ( 7) 00:07:45.239 32667.175 - 32868.825: 98.8895% ( 6) 00:07:45.239 32868.825 - 33070.474: 98.9799% ( 7) 00:07:45.239 33070.474 - 33272.123: 99.0573% ( 6) 00:07:45.239 33272.123 - 33473.772: 99.1348% ( 6) 00:07:45.239 33473.772 - 33675.422: 99.1736% ( 3) 00:07:45.239 40531.495 - 40733.145: 99.2381% ( 5) 00:07:45.239 40733.145 - 40934.794: 99.3285% ( 7) 00:07:45.239 40934.794 - 41136.443: 99.3931% ( 5) 00:07:45.239 41136.443 - 41338.092: 99.4835% ( 7) 00:07:45.239 41338.092 - 41539.742: 99.5610% ( 6) 00:07:45.239 41539.742 - 41741.391: 99.6384% ( 6) 00:07:45.239 41741.391 - 41943.040: 99.7159% ( 6) 
00:07:45.239 41943.040 - 42144.689: 99.7934% ( 6) 00:07:45.239 42144.689 - 42346.338: 99.8580% ( 5) 00:07:45.239 42346.338 - 42547.988: 99.9354% ( 6) 00:07:45.239 42547.988 - 42749.637: 100.0000% ( 5) 00:07:45.239 00:07:45.239 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:45.239 ============================================================================== 00:07:45.239 Range in us Cumulative IO count 00:07:45.239 11090.708 - 11141.120: 0.0387% ( 3) 00:07:45.239 11141.120 - 11191.532: 0.0646% ( 2) 00:07:45.240 11191.532 - 11241.945: 0.1033% ( 3) 00:07:45.240 11241.945 - 11292.357: 0.1162% ( 1) 00:07:45.240 11292.357 - 11342.769: 0.1550% ( 3) 00:07:45.240 11342.769 - 11393.182: 0.1808% ( 2) 00:07:45.240 11393.182 - 11443.594: 0.2195% ( 3) 00:07:45.240 11443.594 - 11494.006: 0.2454% ( 2) 00:07:45.240 11494.006 - 11544.418: 0.2712% ( 2) 00:07:45.240 11544.418 - 11594.831: 0.3099% ( 3) 00:07:45.240 11594.831 - 11645.243: 0.3357% ( 2) 00:07:45.240 11645.243 - 11695.655: 0.3616% ( 2) 00:07:45.240 11695.655 - 11746.068: 0.4003% ( 3) 00:07:45.240 11746.068 - 11796.480: 0.4261% ( 2) 00:07:45.240 11796.480 - 11846.892: 0.4520% ( 2) 00:07:45.240 11846.892 - 11897.305: 0.4907% ( 3) 00:07:45.240 11897.305 - 11947.717: 0.5036% ( 1) 00:07:45.240 11947.717 - 11998.129: 0.5294% ( 2) 00:07:45.240 11998.129 - 12048.542: 0.5682% ( 3) 00:07:45.240 12048.542 - 12098.954: 0.5811% ( 1) 00:07:45.240 12098.954 - 12149.366: 0.5940% ( 1) 00:07:45.240 12149.366 - 12199.778: 0.6198% ( 2) 00:07:45.240 12199.778 - 12250.191: 0.6327% ( 1) 00:07:45.240 12250.191 - 12300.603: 0.6586% ( 2) 00:07:45.240 12300.603 - 12351.015: 0.6844% ( 2) 00:07:45.240 12351.015 - 12401.428: 0.6973% ( 1) 00:07:45.240 12401.428 - 12451.840: 0.7231% ( 2) 00:07:45.240 12451.840 - 12502.252: 0.7361% ( 1) 00:07:45.240 12502.252 - 12552.665: 0.7619% ( 2) 00:07:45.240 12552.665 - 12603.077: 0.7877% ( 2) 00:07:45.240 12653.489 - 12703.902: 0.8135% ( 2) 00:07:45.240 12703.902 - 12754.314: 0.8264% ( 1) 00:07:45.240 13409.674 - 13510.498: 0.8523% ( 2) 00:07:45.240 13510.498 - 13611.323: 0.9814% ( 10) 00:07:45.240 13611.323 - 13712.148: 1.1622% ( 14) 00:07:45.240 13712.148 - 13812.972: 1.3688% ( 16) 00:07:45.240 13812.972 - 13913.797: 1.6529% ( 22) 00:07:45.240 13913.797 - 14014.622: 2.1307% ( 37) 00:07:45.240 14014.622 - 14115.446: 2.6730% ( 42) 00:07:45.240 14115.446 - 14216.271: 3.2670% ( 46) 00:07:45.240 14216.271 - 14317.095: 4.1064% ( 65) 00:07:45.240 14317.095 - 14417.920: 4.7908% ( 53) 00:07:45.240 14417.920 - 14518.745: 5.8626% ( 83) 00:07:45.240 14518.745 - 14619.569: 7.2056% ( 104) 00:07:45.240 14619.569 - 14720.394: 8.8972% ( 131) 00:07:45.240 14720.394 - 14821.218: 11.0666% ( 168) 00:07:45.240 14821.218 - 14922.043: 13.4298% ( 183) 00:07:45.240 14922.043 - 15022.868: 15.8833% ( 190) 00:07:45.240 15022.868 - 15123.692: 18.2722% ( 185) 00:07:45.240 15123.692 - 15224.517: 20.9582% ( 208) 00:07:45.240 15224.517 - 15325.342: 23.7216% ( 214) 00:07:45.240 15325.342 - 15426.166: 26.8337% ( 241) 00:07:45.240 15426.166 - 15526.991: 30.1265% ( 255) 00:07:45.240 15526.991 - 15627.815: 34.0263% ( 302) 00:07:45.240 15627.815 - 15728.640: 37.9649% ( 305) 00:07:45.240 15728.640 - 15829.465: 41.6451% ( 285) 00:07:45.240 15829.465 - 15930.289: 45.1317% ( 270) 00:07:45.240 15930.289 - 16031.114: 48.7087% ( 277) 00:07:45.240 16031.114 - 16131.938: 52.1823% ( 269) 00:07:45.240 16131.938 - 16232.763: 55.4236% ( 251) 00:07:45.240 16232.763 - 16333.588: 58.5744% ( 244) 00:07:45.240 16333.588 - 16434.412: 61.7769% ( 248) 00:07:45.240 16434.412 - 
16535.237: 64.6952% ( 226) 00:07:45.240 16535.237 - 16636.062: 67.6524% ( 229) 00:07:45.240 16636.062 - 16736.886: 70.4029% ( 213) 00:07:45.240 16736.886 - 16837.711: 73.0630% ( 206) 00:07:45.240 16837.711 - 16938.535: 75.5424% ( 192) 00:07:45.240 16938.535 - 17039.360: 77.8538% ( 179) 00:07:45.240 17039.360 - 17140.185: 79.8812% ( 157) 00:07:45.240 17140.185 - 17241.009: 81.9861% ( 163) 00:07:45.240 17241.009 - 17341.834: 83.8972% ( 148) 00:07:45.240 17341.834 - 17442.658: 85.6276% ( 134) 00:07:45.240 17442.658 - 17543.483: 87.1126% ( 115) 00:07:45.240 17543.483 - 17644.308: 88.3264% ( 94) 00:07:45.240 17644.308 - 17745.132: 89.4757% ( 89) 00:07:45.240 17745.132 - 17845.957: 90.4055% ( 72) 00:07:45.240 17845.957 - 17946.782: 91.2190% ( 63) 00:07:45.240 17946.782 - 18047.606: 91.8518% ( 49) 00:07:45.240 18047.606 - 18148.431: 92.4458% ( 46) 00:07:45.240 18148.431 - 18249.255: 93.1043% ( 51) 00:07:45.240 18249.255 - 18350.080: 93.6983% ( 46) 00:07:45.240 18350.080 - 18450.905: 94.1632% ( 36) 00:07:45.240 18450.905 - 18551.729: 94.6023% ( 34) 00:07:45.240 18551.729 - 18652.554: 94.9897% ( 30) 00:07:45.240 18652.554 - 18753.378: 95.2996% ( 24) 00:07:45.240 18753.378 - 18854.203: 95.7386% ( 34) 00:07:45.240 18854.203 - 18955.028: 96.0873% ( 27) 00:07:45.240 18955.028 - 19055.852: 96.4618% ( 29) 00:07:45.240 19055.852 - 19156.677: 96.7071% ( 19) 00:07:45.240 19156.677 - 19257.502: 96.9008% ( 15) 00:07:45.240 19257.502 - 19358.326: 97.0429% ( 11) 00:07:45.240 19358.326 - 19459.151: 97.1591% ( 9) 00:07:45.240 19459.151 - 19559.975: 97.2366% ( 6) 00:07:45.240 19559.975 - 19660.800: 97.3011% ( 5) 00:07:45.240 19660.800 - 19761.625: 97.3657% ( 5) 00:07:45.240 19761.625 - 19862.449: 97.4819% ( 9) 00:07:45.240 19862.449 - 19963.274: 97.5981% ( 9) 00:07:45.240 19963.274 - 20064.098: 97.7273% ( 10) 00:07:45.240 20064.098 - 20164.923: 97.7789% ( 4) 00:07:45.240 20164.923 - 20265.748: 97.8435% ( 5) 00:07:45.240 20265.748 - 20366.572: 97.9210% ( 6) 00:07:45.240 20366.572 - 20467.397: 97.9855% ( 5) 00:07:45.240 20467.397 - 20568.222: 98.0501% ( 5) 00:07:45.240 20568.222 - 20669.046: 98.1147% ( 5) 00:07:45.240 20669.046 - 20769.871: 98.1663% ( 4) 00:07:45.240 20769.871 - 20870.695: 98.2309% ( 5) 00:07:45.240 20870.695 - 20971.520: 98.2825% ( 4) 00:07:45.240 20971.520 - 21072.345: 98.3213% ( 3) 00:07:45.240 21072.345 - 21173.169: 98.3471% ( 2) 00:07:45.240 30449.034 - 30650.683: 98.3729% ( 2) 00:07:45.240 30650.683 - 30852.332: 98.4375% ( 5) 00:07:45.240 30852.332 - 31053.982: 98.4762% ( 3) 00:07:45.240 31053.982 - 31255.631: 98.5537% ( 6) 00:07:45.240 31255.631 - 31457.280: 98.6312% ( 6) 00:07:45.240 31457.280 - 31658.929: 98.7087% ( 6) 00:07:45.240 31658.929 - 31860.578: 98.7862% ( 6) 00:07:45.240 31860.578 - 32062.228: 98.8765% ( 7) 00:07:45.240 32062.228 - 32263.877: 98.9540% ( 6) 00:07:45.240 32263.877 - 32465.526: 99.0444% ( 7) 00:07:45.240 32465.526 - 32667.175: 99.1219% ( 6) 00:07:45.240 32667.175 - 32868.825: 99.1477% ( 2) 00:07:45.240 32868.825 - 33070.474: 99.1736% ( 2) 00:07:45.240 39119.951 - 39321.600: 99.1994% ( 2) 00:07:45.240 39321.600 - 39523.249: 99.2639% ( 5) 00:07:45.240 39523.249 - 39724.898: 99.3156% ( 4) 00:07:45.240 39724.898 - 39926.548: 99.3802% ( 5) 00:07:45.240 39926.548 - 40128.197: 99.4576% ( 6) 00:07:45.240 40128.197 - 40329.846: 99.5351% ( 6) 00:07:45.240 40329.846 - 40531.495: 99.6126% ( 6) 00:07:45.240 40531.495 - 40733.145: 99.6901% ( 6) 00:07:45.240 40733.145 - 40934.794: 99.7676% ( 6) 00:07:45.240 40934.794 - 41136.443: 99.8580% ( 7) 00:07:45.240 41136.443 - 41338.092: 
99.9483% ( 7) 00:07:45.240 41338.092 - 41539.742: 100.0000% ( 4) 00:07:45.240 00:07:45.240 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:45.240 ============================================================================== 00:07:45.240 Range in us Cumulative IO count 00:07:45.240 10838.646 - 10889.058: 0.0258% ( 2) 00:07:45.240 10889.058 - 10939.471: 0.0646% ( 3) 00:07:45.240 10939.471 - 10989.883: 0.0904% ( 2) 00:07:45.240 10989.883 - 11040.295: 0.1291% ( 3) 00:07:45.240 11040.295 - 11090.708: 0.1550% ( 2) 00:07:45.240 11090.708 - 11141.120: 0.1937% ( 3) 00:07:45.240 11141.120 - 11191.532: 0.2195% ( 2) 00:07:45.240 11191.532 - 11241.945: 0.2583% ( 3) 00:07:45.240 11241.945 - 11292.357: 0.2841% ( 2) 00:07:45.240 11292.357 - 11342.769: 0.3228% ( 3) 00:07:45.240 11342.769 - 11393.182: 0.3487% ( 2) 00:07:45.240 11393.182 - 11443.594: 0.3745% ( 2) 00:07:45.240 11443.594 - 11494.006: 0.4132% ( 3) 00:07:45.240 11494.006 - 11544.418: 0.4390% ( 2) 00:07:45.240 11544.418 - 11594.831: 0.4649% ( 2) 00:07:45.240 11594.831 - 11645.243: 0.4907% ( 2) 00:07:45.240 11645.243 - 11695.655: 0.5294% ( 3) 00:07:45.240 11695.655 - 11746.068: 0.5682% ( 3) 00:07:45.240 11746.068 - 11796.480: 0.5940% ( 2) 00:07:45.240 11796.480 - 11846.892: 0.6327% ( 3) 00:07:45.240 11846.892 - 11897.305: 0.6715% ( 3) 00:07:45.240 11897.305 - 11947.717: 0.6973% ( 2) 00:07:45.240 11947.717 - 11998.129: 0.7361% ( 3) 00:07:45.240 11998.129 - 12048.542: 0.7748% ( 3) 00:07:45.240 12048.542 - 12098.954: 0.8006% ( 2) 00:07:45.240 12098.954 - 12149.366: 0.8264% ( 2) 00:07:45.240 13409.674 - 13510.498: 0.8781% ( 4) 00:07:45.240 13510.498 - 13611.323: 1.0072% ( 10) 00:07:45.240 13611.323 - 13712.148: 1.0976% ( 7) 00:07:45.240 13712.148 - 13812.972: 1.3817% ( 22) 00:07:45.240 13812.972 - 13913.797: 1.7304% ( 27) 00:07:45.240 13913.797 - 14014.622: 2.2211% ( 38) 00:07:45.240 14014.622 - 14115.446: 2.7376% ( 40) 00:07:45.240 14115.446 - 14216.271: 3.2929% ( 43) 00:07:45.240 14216.271 - 14317.095: 3.9644% ( 52) 00:07:45.240 14317.095 - 14417.920: 4.8037% ( 65) 00:07:45.240 14417.920 - 14518.745: 5.9788% ( 91) 00:07:45.240 14518.745 - 14619.569: 7.2314% ( 97) 00:07:45.240 14619.569 - 14720.394: 8.7810% ( 120) 00:07:45.240 14720.394 - 14821.218: 10.5888% ( 140) 00:07:45.240 14821.218 - 14922.043: 12.7324% ( 166) 00:07:45.240 14922.043 - 15022.868: 14.9664% ( 173) 00:07:45.240 15022.868 - 15123.692: 17.4199% ( 190) 00:07:45.240 15123.692 - 15224.517: 20.0284% ( 202) 00:07:45.240 15224.517 - 15325.342: 23.1018% ( 238) 00:07:45.240 15325.342 - 15426.166: 26.4721% ( 261) 00:07:45.240 15426.166 - 15526.991: 29.7392% ( 253) 00:07:45.240 15526.991 - 15627.815: 33.1612% ( 265) 00:07:45.240 15627.815 - 15728.640: 36.8285% ( 284) 00:07:45.240 15728.640 - 15829.465: 40.6508% ( 296) 00:07:45.240 15829.465 - 15930.289: 44.4215% ( 292) 00:07:45.240 15930.289 - 16031.114: 48.1018% ( 285) 00:07:45.240 16031.114 - 16131.938: 51.6787% ( 277) 00:07:45.240 16131.938 - 16232.763: 55.5398% ( 299) 00:07:45.240 16232.763 - 16333.588: 59.2459% ( 287) 00:07:45.240 16333.588 - 16434.412: 62.7841% ( 274) 00:07:45.240 16434.412 - 16535.237: 66.1544% ( 261) 00:07:45.240 16535.237 - 16636.062: 69.5119% ( 260) 00:07:45.240 16636.062 - 16736.886: 72.3399% ( 219) 00:07:45.240 16736.886 - 16837.711: 74.7934% ( 190) 00:07:45.240 16837.711 - 16938.535: 76.8853% ( 162) 00:07:45.241 16938.535 - 17039.360: 79.0160% ( 165) 00:07:45.241 17039.360 - 17140.185: 81.0692% ( 159) 00:07:45.241 17140.185 - 17241.009: 82.8771% ( 140) 00:07:45.241 17241.009 - 17341.834: 
84.2588% ( 107) 00:07:45.241 17341.834 - 17442.658: 85.3048% ( 81) 00:07:45.241 17442.658 - 17543.483: 86.4540% ( 89) 00:07:45.241 17543.483 - 17644.308: 87.4871% ( 80) 00:07:45.241 17644.308 - 17745.132: 88.5589% ( 83) 00:07:45.241 17745.132 - 17845.957: 89.5274% ( 75) 00:07:45.241 17845.957 - 17946.782: 90.5604% ( 80) 00:07:45.241 17946.782 - 18047.606: 91.3998% ( 65) 00:07:45.241 18047.606 - 18148.431: 92.1229% ( 56) 00:07:45.241 18148.431 - 18249.255: 92.8461% ( 56) 00:07:45.241 18249.255 - 18350.080: 93.4788% ( 49) 00:07:45.241 18350.080 - 18450.905: 93.9824% ( 39) 00:07:45.241 18450.905 - 18551.729: 94.4473% ( 36) 00:07:45.241 18551.729 - 18652.554: 94.9768% ( 41) 00:07:45.241 18652.554 - 18753.378: 95.4545% ( 37) 00:07:45.241 18753.378 - 18854.203: 95.8807% ( 33) 00:07:45.241 18854.203 - 18955.028: 96.2293% ( 27) 00:07:45.241 18955.028 - 19055.852: 96.4360% ( 16) 00:07:45.241 19055.852 - 19156.677: 96.5651% ( 10) 00:07:45.241 19156.677 - 19257.502: 96.7071% ( 11) 00:07:45.241 19257.502 - 19358.326: 96.8492% ( 11) 00:07:45.241 19358.326 - 19459.151: 96.9783% ( 10) 00:07:45.241 19459.151 - 19559.975: 97.1591% ( 14) 00:07:45.241 19559.975 - 19660.800: 97.3528% ( 15) 00:07:45.241 19660.800 - 19761.625: 97.4561% ( 8) 00:07:45.241 19761.625 - 19862.449: 97.5594% ( 8) 00:07:45.241 19862.449 - 19963.274: 97.6627% ( 8) 00:07:45.241 19963.274 - 20064.098: 97.7660% ( 8) 00:07:45.241 20064.098 - 20164.923: 97.8564% ( 7) 00:07:45.241 20164.923 - 20265.748: 97.9210% ( 5) 00:07:45.241 20265.748 - 20366.572: 97.9726% ( 4) 00:07:45.241 20366.572 - 20467.397: 98.0372% ( 5) 00:07:45.241 20467.397 - 20568.222: 98.1018% ( 5) 00:07:45.241 20568.222 - 20669.046: 98.1663% ( 5) 00:07:45.241 20669.046 - 20769.871: 98.2309% ( 5) 00:07:45.241 20769.871 - 20870.695: 98.2955% ( 5) 00:07:45.241 20870.695 - 20971.520: 98.3471% ( 4) 00:07:45.241 28029.243 - 28230.892: 98.4246% ( 6) 00:07:45.241 28230.892 - 28432.542: 98.5150% ( 7) 00:07:45.241 28432.542 - 28634.191: 98.6054% ( 7) 00:07:45.241 28634.191 - 28835.840: 98.6958% ( 7) 00:07:45.241 28835.840 - 29037.489: 98.7732% ( 6) 00:07:45.241 29037.489 - 29239.138: 98.8507% ( 6) 00:07:45.241 29239.138 - 29440.788: 98.9282% ( 6) 00:07:45.241 29440.788 - 29642.437: 99.0057% ( 6) 00:07:45.241 29642.437 - 29844.086: 99.0961% ( 7) 00:07:45.241 29844.086 - 30045.735: 99.1736% ( 6) 00:07:45.241 36901.809 - 37103.458: 99.1994% ( 2) 00:07:45.241 37103.458 - 37305.108: 99.2639% ( 5) 00:07:45.241 37305.108 - 37506.757: 99.3285% ( 5) 00:07:45.241 37506.757 - 37708.406: 99.3931% ( 5) 00:07:45.241 37708.406 - 37910.055: 99.4835% ( 7) 00:07:45.241 37910.055 - 38111.705: 99.5739% ( 7) 00:07:45.241 38111.705 - 38313.354: 99.6513% ( 6) 00:07:45.241 38313.354 - 38515.003: 99.6772% ( 2) 00:07:45.241 38515.003 - 38716.652: 99.7030% ( 2) 00:07:45.241 38716.652 - 38918.302: 99.7805% ( 6) 00:07:45.241 38918.302 - 39119.951: 99.8580% ( 6) 00:07:45.241 39119.951 - 39321.600: 99.9354% ( 6) 00:07:45.241 39321.600 - 39523.249: 100.0000% ( 5) 00:07:45.241 00:07:45.241 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:45.241 ============================================================================== 00:07:45.241 Range in us Cumulative IO count 00:07:45.241 9981.637 - 10032.049: 0.0128% ( 1) 00:07:45.241 10032.049 - 10082.462: 0.0897% ( 6) 00:07:45.241 10082.462 - 10132.874: 0.1537% ( 5) 00:07:45.241 10132.874 - 10183.286: 0.1665% ( 1) 00:07:45.241 10183.286 - 10233.698: 0.1793% ( 1) 00:07:45.241 10233.698 - 10284.111: 0.2049% ( 2) 00:07:45.241 10284.111 - 10334.523: 0.2305% ( 
2) 00:07:45.241 10334.523 - 10384.935: 0.2690% ( 3) 00:07:45.241 10384.935 - 10435.348: 0.3074% ( 3) 00:07:45.241 10435.348 - 10485.760: 0.3458% ( 3) 00:07:45.241 10485.760 - 10536.172: 0.3842% ( 3) 00:07:45.241 10536.172 - 10586.585: 0.3970% ( 1) 00:07:45.241 10586.585 - 10636.997: 0.4226% ( 2) 00:07:45.241 10636.997 - 10687.409: 0.4483% ( 2) 00:07:45.241 10687.409 - 10737.822: 0.4739% ( 2) 00:07:45.241 10737.822 - 10788.234: 0.4995% ( 2) 00:07:45.241 10788.234 - 10838.646: 0.5251% ( 2) 00:07:45.241 10838.646 - 10889.058: 0.5507% ( 2) 00:07:45.241 10889.058 - 10939.471: 0.5763% ( 2) 00:07:45.241 10939.471 - 10989.883: 0.6019% ( 2) 00:07:45.241 10989.883 - 11040.295: 0.6276% ( 2) 00:07:45.241 11040.295 - 11090.708: 0.6532% ( 2) 00:07:45.241 11090.708 - 11141.120: 0.6916% ( 3) 00:07:45.241 11141.120 - 11191.532: 0.7172% ( 2) 00:07:45.241 11191.532 - 11241.945: 0.7556% ( 3) 00:07:45.241 11241.945 - 11292.357: 0.7812% ( 2) 00:07:45.241 11292.357 - 11342.769: 0.8069% ( 2) 00:07:45.241 11342.769 - 11393.182: 0.8197% ( 1) 00:07:45.241 13006.375 - 13107.200: 0.8709% ( 4) 00:07:45.241 13107.200 - 13208.025: 0.9349% ( 5) 00:07:45.241 13208.025 - 13308.849: 1.0118% ( 6) 00:07:45.241 13308.849 - 13409.674: 1.0758% ( 5) 00:07:45.241 13409.674 - 13510.498: 1.1270% ( 4) 00:07:45.241 13510.498 - 13611.323: 1.1911% ( 5) 00:07:45.241 13611.323 - 13712.148: 1.2551% ( 5) 00:07:45.241 13712.148 - 13812.972: 1.3576% ( 8) 00:07:45.241 13812.972 - 13913.797: 1.5241% ( 13) 00:07:45.241 13913.797 - 14014.622: 1.8058% ( 22) 00:07:45.241 14014.622 - 14115.446: 2.2541% ( 35) 00:07:45.241 14115.446 - 14216.271: 2.8048% ( 43) 00:07:45.241 14216.271 - 14317.095: 3.6885% ( 69) 00:07:45.241 14317.095 - 14417.920: 4.9180% ( 96) 00:07:45.241 14417.920 - 14518.745: 6.2116% ( 101) 00:07:45.241 14518.745 - 14619.569: 7.5051% ( 101) 00:07:45.241 14619.569 - 14720.394: 9.0676% ( 122) 00:07:45.241 14720.394 - 14821.218: 10.6557% ( 124) 00:07:45.241 14821.218 - 14922.043: 12.4616% ( 141) 00:07:45.241 14922.043 - 15022.868: 14.6260% ( 169) 00:07:45.241 15022.868 - 15123.692: 17.2515% ( 205) 00:07:45.241 15123.692 - 15224.517: 20.2228% ( 232) 00:07:45.241 15224.517 - 15325.342: 23.3607% ( 245) 00:07:45.241 15325.342 - 15426.166: 26.8186% ( 270) 00:07:45.241 15426.166 - 15526.991: 30.3407% ( 275) 00:07:45.241 15526.991 - 15627.815: 33.5297% ( 249) 00:07:45.241 15627.815 - 15728.640: 37.1798% ( 285) 00:07:45.241 15728.640 - 15829.465: 41.0220% ( 300) 00:07:45.241 15829.465 - 15930.289: 44.8514% ( 299) 00:07:45.241 15930.289 - 16031.114: 48.4887% ( 284) 00:07:45.241 16031.114 - 16131.938: 52.1388% ( 285) 00:07:45.241 16131.938 - 16232.763: 56.3909% ( 332) 00:07:45.241 16232.763 - 16333.588: 59.7336% ( 261) 00:07:45.241 16333.588 - 16434.412: 62.9995% ( 255) 00:07:45.241 16434.412 - 16535.237: 65.9324% ( 229) 00:07:45.241 16535.237 - 16636.062: 68.8012% ( 224) 00:07:45.241 16636.062 - 16736.886: 71.5804% ( 217) 00:07:45.241 16736.886 - 16837.711: 74.5389% ( 231) 00:07:45.241 16837.711 - 16938.535: 77.0492% ( 196) 00:07:45.241 16938.535 - 17039.360: 79.1752% ( 166) 00:07:45.241 17039.360 - 17140.185: 81.1219% ( 152) 00:07:45.241 17140.185 - 17241.009: 82.7869% ( 130) 00:07:45.241 17241.009 - 17341.834: 84.3366% ( 121) 00:07:45.241 17341.834 - 17442.658: 85.8222% ( 116) 00:07:45.241 17442.658 - 17543.483: 87.0133% ( 93) 00:07:45.241 17543.483 - 17644.308: 88.2812% ( 99) 00:07:45.241 17644.308 - 17745.132: 89.4980% ( 95) 00:07:45.241 17745.132 - 17845.957: 90.5482% ( 82) 00:07:45.241 17845.957 - 17946.782: 91.3038% ( 59) 00:07:45.241 
17946.782 - 18047.606: 91.9698% ( 52) 00:07:45.241 18047.606 - 18148.431: 92.6358% ( 52) 00:07:45.241 18148.431 - 18249.255: 93.1865% ( 43) 00:07:45.241 18249.255 - 18350.080: 93.5963% ( 32) 00:07:45.241 18350.080 - 18450.905: 94.0061% ( 32) 00:07:45.241 18450.905 - 18551.729: 94.3519% ( 27) 00:07:45.241 18551.729 - 18652.554: 94.7234% ( 29) 00:07:45.241 18652.554 - 18753.378: 95.1588% ( 34) 00:07:45.241 18753.378 - 18854.203: 95.5302% ( 29) 00:07:45.241 18854.203 - 18955.028: 95.8760% ( 27) 00:07:45.241 18955.028 - 19055.852: 96.1194% ( 19) 00:07:45.241 19055.852 - 19156.677: 96.3755% ( 20) 00:07:45.241 19156.677 - 19257.502: 96.6189% ( 19) 00:07:45.241 19257.502 - 19358.326: 96.9006% ( 22) 00:07:45.241 19358.326 - 19459.151: 97.1952% ( 23) 00:07:45.241 19459.151 - 19559.975: 97.4641% ( 21) 00:07:45.241 19559.975 - 19660.800: 97.7459% ( 22) 00:07:45.241 19660.800 - 19761.625: 97.9764% ( 18) 00:07:45.241 19761.625 - 19862.449: 98.1942% ( 17) 00:07:45.241 19862.449 - 19963.274: 98.3478% ( 12) 00:07:45.241 19963.274 - 20064.098: 98.4503% ( 8) 00:07:45.241 20064.098 - 20164.923: 98.5528% ( 8) 00:07:45.241 20164.923 - 20265.748: 98.6296% ( 6) 00:07:45.241 20265.748 - 20366.572: 98.7449% ( 9) 00:07:45.241 20366.572 - 20467.397: 98.8345% ( 7) 00:07:45.241 20467.397 - 20568.222: 98.9114% ( 6) 00:07:45.241 20568.222 - 20669.046: 98.9498% ( 3) 00:07:45.241 20669.046 - 20769.871: 98.9882% ( 3) 00:07:45.241 20769.871 - 20870.695: 99.0266% ( 3) 00:07:45.241 20870.695 - 20971.520: 99.0651% ( 3) 00:07:45.241 20971.520 - 21072.345: 99.1035% ( 3) 00:07:45.241 21072.345 - 21173.169: 99.1419% ( 3) 00:07:45.241 21173.169 - 21273.994: 99.1803% ( 3) 00:07:45.241 27827.594 - 28029.243: 99.2059% ( 2) 00:07:45.241 28029.243 - 28230.892: 99.2956% ( 7) 00:07:45.241 28230.892 - 28432.542: 99.3852% ( 7) 00:07:45.241 28432.542 - 28634.191: 99.4749% ( 7) 00:07:45.241 28634.191 - 28835.840: 99.5774% ( 8) 00:07:45.241 28835.840 - 29037.489: 99.6670% ( 7) 00:07:45.241 29037.489 - 29239.138: 99.7567% ( 7) 00:07:45.241 29239.138 - 29440.788: 99.8463% ( 7) 00:07:45.241 29440.788 - 29642.437: 99.9360% ( 7) 00:07:45.241 29642.437 - 29844.086: 100.0000% ( 5) 00:07:45.241 00:07:45.241 12:33:44 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:46.629 Initializing NVMe Controllers 00:07:46.629 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:46.629 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:46.629 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:46.629 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:46.629 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:46.629 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:46.629 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:46.629 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:46.629 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:46.629 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:46.629 Initialization complete. Launching workers. 
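A note on the workload above: this whole nvme_perf stage is driven by the single spdk_nvme_perf invocation logged at 12:33:44. A minimal sketch for reproducing it by hand, using the same binary path and flags as this run (it assumes the NVMe devices are already bound to a userspace driver, e.g. via scripts/setup.sh):

  # -q 128   queue depth per namespace
  # -w write write workload pattern
  # -o 12288 12 KiB I/O size
  # -t 1     run time in seconds
  # -LL      software latency tracking; giving -L twice also prints the
  #          per-device latency histograms that follow in this log
  # -i 0     shared memory group ID
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -q 128 -w write -o 12288 -t 1 -LL -i 0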
00:07:46.629 ======================================================== 00:07:46.629 Latency(us) 00:07:46.629 Device Information : IOPS MiB/s Average min max 00:07:46.629 PCIE (0000:00:13.0) NSID 1 from core 0: 7615.98 89.25 17468.89 12113.36 89935.49 00:07:46.629 PCIE (0000:00:10.0) NSID 1 from core 0: 7615.98 89.25 17342.46 10578.32 85075.55 00:07:46.629 PCIE (0000:00:11.0) NSID 1 from core 0: 7615.98 89.25 17302.74 9670.07 85001.17 00:07:46.629 PCIE (0000:00:12.0) NSID 1 from core 0: 7585.98 88.90 17337.34 7738.05 89874.67 00:07:46.629 PCIE (0000:00:12.0) NSID 2 from core 0: 7551.98 88.50 17383.44 13287.45 90255.56 00:07:46.629 PCIE (0000:00:12.0) NSID 3 from core 0: 7615.98 89.25 17205.82 13168.79 89509.76 00:07:46.629 ======================================================== 00:07:46.629 Total : 45601.86 534.40 17340.05 7738.05 90255.56 00:07:46.629 00:07:46.629 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:46.629 ================================================================================= 00:07:46.629 1.00000% : 13409.674us 00:07:46.629 10.00000% : 14317.095us 00:07:46.629 25.00000% : 14821.218us 00:07:46.629 50.00000% : 15526.991us 00:07:46.630 75.00000% : 16736.886us 00:07:46.630 90.00000% : 19055.852us 00:07:46.630 95.00000% : 21677.292us 00:07:46.630 98.00000% : 47185.920us 00:07:46.630 99.00000% : 84692.677us 00:07:46.630 99.50000% : 85499.274us 00:07:46.630 99.90000% : 89935.557us 00:07:46.630 99.99000% : 89935.557us 00:07:46.630 99.99900% : 89935.557us 00:07:46.630 99.99990% : 89935.557us 00:07:46.630 99.99999% : 89935.557us 00:07:46.630 00:07:46.630 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:46.630 ================================================================================= 00:07:46.630 1.00000% : 13409.674us 00:07:46.630 10.00000% : 14317.095us 00:07:46.630 25.00000% : 14821.218us 00:07:46.630 50.00000% : 15526.991us 00:07:46.630 75.00000% : 16736.886us 00:07:46.630 90.00000% : 18854.203us 00:07:46.630 95.00000% : 21273.994us 00:07:46.630 98.00000% : 45371.077us 00:07:46.630 99.00000% : 79449.797us 00:07:46.630 99.50000% : 81062.991us 00:07:46.630 99.90000% : 85095.975us 00:07:46.630 99.99000% : 85095.975us 00:07:46.630 99.99900% : 85095.975us 00:07:46.630 99.99990% : 85095.975us 00:07:46.630 99.99999% : 85095.975us 00:07:46.630 00:07:46.630 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:46.630 ================================================================================= 00:07:46.630 1.00000% : 13510.498us 00:07:46.630 10.00000% : 14317.095us 00:07:46.630 25.00000% : 14922.043us 00:07:46.630 50.00000% : 15526.991us 00:07:46.630 75.00000% : 16736.886us 00:07:46.630 90.00000% : 18955.028us 00:07:46.630 95.00000% : 20971.520us 00:07:46.630 98.00000% : 43556.234us 00:07:46.630 99.00000% : 79853.095us 00:07:46.630 99.50000% : 81466.289us 00:07:46.630 99.90000% : 81466.289us 00:07:46.630 99.99000% : 85095.975us 00:07:46.630 99.99900% : 85095.975us 00:07:46.630 99.99990% : 85095.975us 00:07:46.630 99.99999% : 85095.975us 00:07:46.630 00:07:46.630 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:46.630 ================================================================================= 00:07:46.630 1.00000% : 13812.972us 00:07:46.630 10.00000% : 14417.920us 00:07:46.630 25.00000% : 14821.218us 00:07:46.630 50.00000% : 15526.991us 00:07:46.630 75.00000% : 16837.711us 00:07:46.630 90.00000% : 18753.378us 00:07:46.630 95.00000% : 20669.046us 00:07:46.630 98.00000% : 43152.935us 
00:07:46.630 99.00000% : 79853.095us 00:07:46.630 99.50000% : 85095.975us 00:07:46.630 99.90000% : 89935.557us 00:07:46.630 99.99000% : 89935.557us 00:07:46.630 99.99900% : 89935.557us 00:07:46.630 99.99990% : 89935.557us 00:07:46.630 99.99999% : 89935.557us 00:07:46.630 00:07:46.630 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:46.630 ================================================================================= 00:07:46.630 1.00000% : 13812.972us 00:07:46.630 10.00000% : 14317.095us 00:07:46.630 25.00000% : 14821.218us 00:07:46.630 50.00000% : 15526.991us 00:07:46.630 75.00000% : 16837.711us 00:07:46.630 90.00000% : 18955.028us 00:07:46.630 95.00000% : 20467.397us 00:07:46.630 98.00000% : 41338.092us 00:07:46.630 99.00000% : 83886.080us 00:07:46.630 99.50000% : 89128.960us 00:07:46.630 99.90000% : 90338.855us 00:07:46.630 99.99000% : 90338.855us 00:07:46.630 99.99900% : 90338.855us 00:07:46.630 99.99990% : 90338.855us 00:07:46.630 99.99999% : 90338.855us 00:07:46.630 00:07:46.630 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:46.630 ================================================================================= 00:07:46.630 1.00000% : 13712.148us 00:07:46.630 10.00000% : 14417.920us 00:07:46.630 25.00000% : 14821.218us 00:07:46.630 50.00000% : 15426.166us 00:07:46.630 75.00000% : 16636.062us 00:07:46.630 90.00000% : 18955.028us 00:07:46.630 95.00000% : 20769.871us 00:07:46.630 98.00000% : 31053.982us 00:07:46.630 99.00000% : 84289.378us 00:07:46.630 99.50000% : 85499.274us 00:07:46.630 99.90000% : 89532.258us 00:07:46.630 99.99000% : 89532.258us 00:07:46.630 99.99900% : 89532.258us 00:07:46.630 99.99990% : 89532.258us 00:07:46.630 99.99999% : 89532.258us 00:07:46.630 00:07:46.630 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:46.630 ============================================================================== 00:07:46.630 Range in us Cumulative IO count 00:07:46.630 12098.954 - 12149.366: 0.0525% ( 4) 00:07:46.630 12149.366 - 12199.778: 0.0788% ( 2) 00:07:46.630 12199.778 - 12250.191: 0.0919% ( 1) 00:07:46.630 12250.191 - 12300.603: 0.1313% ( 3) 00:07:46.630 12300.603 - 12351.015: 0.1576% ( 2) 00:07:46.630 12351.015 - 12401.428: 0.1838% ( 2) 00:07:46.630 12401.428 - 12451.840: 0.2101% ( 2) 00:07:46.630 12451.840 - 12502.252: 0.2495% ( 3) 00:07:46.630 12502.252 - 12552.665: 0.2757% ( 2) 00:07:46.630 12552.665 - 12603.077: 0.3151% ( 3) 00:07:46.630 12603.077 - 12653.489: 0.3414% ( 2) 00:07:46.630 12653.489 - 12703.902: 0.3808% ( 3) 00:07:46.630 12703.902 - 12754.314: 0.4070% ( 2) 00:07:46.630 12754.314 - 12804.726: 0.4464% ( 3) 00:07:46.630 12804.726 - 12855.138: 0.4727% ( 2) 00:07:46.630 12855.138 - 12905.551: 0.5121% ( 3) 00:07:46.630 12905.551 - 13006.375: 0.5777% ( 5) 00:07:46.630 13006.375 - 13107.200: 0.6565% ( 6) 00:07:46.630 13107.200 - 13208.025: 0.7484% ( 7) 00:07:46.630 13208.025 - 13308.849: 0.8929% ( 11) 00:07:46.630 13308.849 - 13409.674: 1.0504% ( 12) 00:07:46.630 13409.674 - 13510.498: 1.1686% ( 9) 00:07:46.630 13510.498 - 13611.323: 1.5231% ( 27) 00:07:46.630 13611.323 - 13712.148: 1.8645% ( 26) 00:07:46.630 13712.148 - 13812.972: 2.5998% ( 56) 00:07:46.630 13812.972 - 13913.797: 3.4926% ( 68) 00:07:46.630 13913.797 - 14014.622: 4.9501% ( 111) 00:07:46.630 14014.622 - 14115.446: 6.4338% ( 113) 00:07:46.630 14115.446 - 14216.271: 8.5478% ( 161) 00:07:46.630 14216.271 - 14317.095: 10.5173% ( 150) 00:07:46.630 14317.095 - 14417.920: 12.4737% ( 149) 00:07:46.630 14417.920 - 14518.745: 15.8088% ( 254) 
00:07:46.630 14518.745 - 14619.569: 19.0126% ( 244) 00:07:46.630 14619.569 - 14720.394: 22.7547% ( 285) 00:07:46.630 14720.394 - 14821.218: 26.6282% ( 295) 00:07:46.630 14821.218 - 14922.043: 30.4228% ( 289) 00:07:46.630 14922.043 - 15022.868: 34.4013% ( 303) 00:07:46.630 15022.868 - 15123.692: 38.3797% ( 303) 00:07:46.630 15123.692 - 15224.517: 42.0562% ( 280) 00:07:46.630 15224.517 - 15325.342: 46.0478% ( 304) 00:07:46.630 15325.342 - 15426.166: 49.2910% ( 247) 00:07:46.630 15426.166 - 15526.991: 52.2847% ( 228) 00:07:46.630 15526.991 - 15627.815: 54.9107% ( 200) 00:07:46.630 15627.815 - 15728.640: 57.8388% ( 223) 00:07:46.630 15728.640 - 15829.465: 60.1628% ( 177) 00:07:46.630 15829.465 - 15930.289: 62.9333% ( 211) 00:07:46.630 15930.289 - 16031.114: 65.0604% ( 162) 00:07:46.630 16031.114 - 16131.938: 67.5289% ( 188) 00:07:46.630 16131.938 - 16232.763: 69.9317% ( 183) 00:07:46.630 16232.763 - 16333.588: 71.7306% ( 137) 00:07:46.630 16333.588 - 16434.412: 72.7547% ( 78) 00:07:46.630 16434.412 - 16535.237: 73.6213% ( 66) 00:07:46.630 16535.237 - 16636.062: 74.5273% ( 69) 00:07:46.630 16636.062 - 16736.886: 75.7747% ( 95) 00:07:46.630 16736.886 - 16837.711: 77.0221% ( 95) 00:07:46.630 16837.711 - 16938.535: 78.2694% ( 95) 00:07:46.630 16938.535 - 17039.360: 79.2936% ( 78) 00:07:46.630 17039.360 - 17140.185: 80.3703% ( 82) 00:07:46.630 17140.185 - 17241.009: 81.5126% ( 87) 00:07:46.630 17241.009 - 17341.834: 82.2479% ( 56) 00:07:46.630 17341.834 - 17442.658: 82.9175% ( 51) 00:07:46.630 17442.658 - 17543.483: 83.6003% ( 52) 00:07:46.630 17543.483 - 17644.308: 83.9811% ( 29) 00:07:46.630 17644.308 - 17745.132: 84.2962% ( 24) 00:07:46.630 17745.132 - 17845.957: 84.6901% ( 30) 00:07:46.630 17845.957 - 17946.782: 85.1759% ( 37) 00:07:46.630 17946.782 - 18047.606: 85.5436% ( 28) 00:07:46.630 18047.606 - 18148.431: 85.9506% ( 31) 00:07:46.630 18148.431 - 18249.255: 86.5940% ( 49) 00:07:46.630 18249.255 - 18350.080: 87.0798% ( 37) 00:07:46.630 18350.080 - 18450.905: 87.6444% ( 43) 00:07:46.630 18450.905 - 18551.729: 88.2353% ( 45) 00:07:46.630 18551.729 - 18652.554: 88.7211% ( 37) 00:07:46.630 18652.554 - 18753.378: 89.2463% ( 40) 00:07:46.630 18753.378 - 18854.203: 89.5089% ( 20) 00:07:46.630 18854.203 - 18955.028: 89.7715% ( 20) 00:07:46.630 18955.028 - 19055.852: 90.0341% ( 20) 00:07:46.630 19055.852 - 19156.677: 90.2836% ( 19) 00:07:46.630 19156.677 - 19257.502: 90.5331% ( 19) 00:07:46.630 19257.502 - 19358.326: 90.7169% ( 14) 00:07:46.630 19358.326 - 19459.151: 90.9270% ( 16) 00:07:46.630 19459.151 - 19559.975: 91.2290% ( 23) 00:07:46.630 19559.975 - 19660.800: 91.7148% ( 37) 00:07:46.630 19660.800 - 19761.625: 92.2400% ( 40) 00:07:46.630 19761.625 - 19862.449: 92.5814% ( 26) 00:07:46.630 19862.449 - 19963.274: 92.8703% ( 22) 00:07:46.630 19963.274 - 20064.098: 93.2511% ( 29) 00:07:46.630 20064.098 - 20164.923: 93.4611% ( 16) 00:07:46.630 20164.923 - 20265.748: 93.6712% ( 16) 00:07:46.630 20265.748 - 20366.572: 93.8682% ( 15) 00:07:46.630 20366.572 - 20467.397: 94.0520% ( 14) 00:07:46.630 20467.397 - 20568.222: 94.2096% ( 12) 00:07:46.630 20568.222 - 20669.046: 94.3146% ( 8) 00:07:46.630 20669.046 - 20769.871: 94.4722% ( 12) 00:07:46.630 20769.871 - 20870.695: 94.7742% ( 23) 00:07:46.630 20870.695 - 20971.520: 94.8398% ( 5) 00:07:46.630 20971.520 - 21072.345: 94.9055% ( 5) 00:07:46.630 21072.345 - 21173.169: 94.9580% ( 4) 00:07:46.630 21475.643 - 21576.468: 94.9711% ( 1) 00:07:46.630 21576.468 - 21677.292: 95.0236% ( 4) 00:07:46.630 21677.292 - 21778.117: 95.2600% ( 18) 00:07:46.630 21778.117 
- 21878.942: 95.4569% ( 15) 00:07:46.630 21878.942 - 21979.766: 95.6014% ( 11) 00:07:46.630 21979.766 - 22080.591: 95.7327% ( 10) 00:07:46.630 22080.591 - 22181.415: 95.9559% ( 17) 00:07:46.630 22181.415 - 22282.240: 96.1134% ( 12) 00:07:46.630 22282.240 - 22383.065: 96.2710% ( 12) 00:07:46.630 22383.065 - 22483.889: 96.3892% ( 9) 00:07:46.631 22483.889 - 22584.714: 96.5074% ( 9) 00:07:46.631 22584.714 - 22685.538: 96.6124% ( 8) 00:07:46.631 22685.538 - 22786.363: 96.6387% ( 2) 00:07:46.631 36700.160 - 36901.809: 96.6518% ( 1) 00:07:46.631 36901.809 - 37103.458: 96.7700% ( 9) 00:07:46.631 37103.458 - 37305.108: 96.8750% ( 8) 00:07:46.631 37305.108 - 37506.757: 96.9800% ( 8) 00:07:46.631 37506.757 - 37708.406: 97.0457% ( 5) 00:07:46.631 37708.406 - 37910.055: 97.1245% ( 6) 00:07:46.631 37910.055 - 38111.705: 97.2033% ( 6) 00:07:46.631 38111.705 - 38313.354: 97.2820% ( 6) 00:07:46.631 38313.354 - 38515.003: 97.3739% ( 7) 00:07:46.631 38515.003 - 38716.652: 97.4659% ( 7) 00:07:46.631 38716.652 - 38918.302: 97.4790% ( 1) 00:07:46.631 44766.129 - 44967.778: 97.5053% ( 2) 00:07:46.631 44967.778 - 45169.428: 97.5972% ( 7) 00:07:46.631 45169.428 - 45371.077: 97.6891% ( 7) 00:07:46.631 45371.077 - 45572.726: 97.8598% ( 13) 00:07:46.631 45572.726 - 45774.375: 97.9779% ( 9) 00:07:46.631 46782.622 - 46984.271: 97.9911% ( 1) 00:07:46.631 46984.271 - 47185.920: 98.0699% ( 6) 00:07:46.631 47185.920 - 47387.569: 98.1355% ( 5) 00:07:46.631 47387.569 - 47589.218: 98.2012% ( 5) 00:07:46.631 47589.218 - 47790.868: 98.2799% ( 6) 00:07:46.631 47790.868 - 47992.517: 98.3193% ( 3) 00:07:46.631 81062.991 - 81466.289: 98.6870% ( 28) 00:07:46.631 83886.080 - 84289.378: 98.8577% ( 13) 00:07:46.631 84289.378 - 84692.677: 99.1597% ( 23) 00:07:46.631 85095.975 - 85499.274: 99.5142% ( 27) 00:07:46.631 85499.274 - 85902.572: 99.6324% ( 9) 00:07:46.631 89128.960 - 89532.258: 99.8030% ( 13) 00:07:46.631 89532.258 - 89935.557: 100.0000% ( 15) 00:07:46.631 00:07:46.631 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:46.631 ============================================================================== 00:07:46.631 Range in us Cumulative IO count 00:07:46.631 10536.172 - 10586.585: 0.0131% ( 1) 00:07:46.631 10586.585 - 10636.997: 0.0525% ( 3) 00:07:46.631 10636.997 - 10687.409: 0.0919% ( 3) 00:07:46.631 10687.409 - 10737.822: 0.1576% ( 5) 00:07:46.631 10737.822 - 10788.234: 0.2757% ( 9) 00:07:46.631 10788.234 - 10838.646: 0.3414% ( 5) 00:07:46.631 10838.646 - 10889.058: 0.3808% ( 3) 00:07:46.631 10889.058 - 10939.471: 0.3939% ( 1) 00:07:46.631 10939.471 - 10989.883: 0.4070% ( 1) 00:07:46.631 10989.883 - 11040.295: 0.4202% ( 1) 00:07:46.631 11040.295 - 11090.708: 0.4333% ( 1) 00:07:46.631 11090.708 - 11141.120: 0.4727% ( 3) 00:07:46.631 11292.357 - 11342.769: 0.5252% ( 4) 00:07:46.631 11342.769 - 11393.182: 0.6434% ( 9) 00:07:46.631 11544.418 - 11594.831: 0.6565% ( 1) 00:07:46.631 11594.831 - 11645.243: 0.6696% ( 1) 00:07:46.631 11645.243 - 11695.655: 0.7090% ( 3) 00:07:46.631 11695.655 - 11746.068: 0.7222% ( 1) 00:07:46.631 11746.068 - 11796.480: 0.7616% ( 3) 00:07:46.631 11846.892 - 11897.305: 0.7878% ( 2) 00:07:46.631 11897.305 - 11947.717: 0.8141% ( 2) 00:07:46.631 11947.717 - 11998.129: 0.8403% ( 2) 00:07:46.631 13006.375 - 13107.200: 0.8535% ( 1) 00:07:46.631 13107.200 - 13208.025: 0.8666% ( 1) 00:07:46.631 13208.025 - 13308.849: 0.9585% ( 7) 00:07:46.631 13308.849 - 13409.674: 1.1817% ( 17) 00:07:46.631 13409.674 - 13510.498: 1.4049% ( 17) 00:07:46.631 13510.498 - 13611.323: 2.1271% ( 55) 00:07:46.631 
13611.323 - 13712.148: 2.8361% ( 54) 00:07:46.631 13712.148 - 13812.972: 3.5058% ( 51) 00:07:46.631 13812.972 - 13913.797: 4.1098% ( 46) 00:07:46.631 13913.797 - 14014.622: 5.7117% ( 122) 00:07:46.631 14014.622 - 14115.446: 7.4974% ( 136) 00:07:46.631 14115.446 - 14216.271: 9.3881% ( 144) 00:07:46.631 14216.271 - 14317.095: 11.4627% ( 158) 00:07:46.631 14317.095 - 14417.920: 14.1807% ( 207) 00:07:46.631 14417.920 - 14518.745: 16.9905% ( 214) 00:07:46.631 14518.745 - 14619.569: 19.5116% ( 192) 00:07:46.631 14619.569 - 14720.394: 23.1486% ( 277) 00:07:46.631 14720.394 - 14821.218: 26.5100% ( 256) 00:07:46.631 14821.218 - 14922.043: 30.1996% ( 281) 00:07:46.631 14922.043 - 15022.868: 34.0336% ( 292) 00:07:46.631 15022.868 - 15123.692: 37.5788% ( 270) 00:07:46.631 15123.692 - 15224.517: 41.2815% ( 282) 00:07:46.631 15224.517 - 15325.342: 44.7479% ( 264) 00:07:46.631 15325.342 - 15426.166: 48.5294% ( 288) 00:07:46.631 15426.166 - 15526.991: 52.1140% ( 273) 00:07:46.631 15526.991 - 15627.815: 55.1077% ( 228) 00:07:46.631 15627.815 - 15728.640: 57.6024% ( 190) 00:07:46.631 15728.640 - 15829.465: 59.7952% ( 167) 00:07:46.631 15829.465 - 15930.289: 62.0011% ( 168) 00:07:46.631 15930.289 - 16031.114: 63.9706% ( 150) 00:07:46.631 16031.114 - 16131.938: 65.9270% ( 149) 00:07:46.631 16131.938 - 16232.763: 67.8309% ( 145) 00:07:46.631 16232.763 - 16333.588: 69.6166% ( 136) 00:07:46.631 16333.588 - 16434.412: 71.3761% ( 134) 00:07:46.631 16434.412 - 16535.237: 73.0305% ( 126) 00:07:46.631 16535.237 - 16636.062: 74.3960% ( 104) 00:07:46.631 16636.062 - 16736.886: 75.2626% ( 66) 00:07:46.631 16736.886 - 16837.711: 76.3130% ( 80) 00:07:46.631 16837.711 - 16938.535: 77.6523% ( 102) 00:07:46.631 16938.535 - 17039.360: 78.9653% ( 100) 00:07:46.631 17039.360 - 17140.185: 80.0814% ( 85) 00:07:46.631 17140.185 - 17241.009: 81.0137% ( 71) 00:07:46.631 17241.009 - 17341.834: 81.7489% ( 56) 00:07:46.631 17341.834 - 17442.658: 82.4974% ( 57) 00:07:46.631 17442.658 - 17543.483: 83.0882% ( 45) 00:07:46.631 17543.483 - 17644.308: 83.7579% ( 51) 00:07:46.631 17644.308 - 17745.132: 84.7689% ( 77) 00:07:46.631 17745.132 - 17845.957: 85.7668% ( 76) 00:07:46.631 17845.957 - 17946.782: 86.6597% ( 68) 00:07:46.631 17946.782 - 18047.606: 87.2637% ( 46) 00:07:46.631 18047.606 - 18148.431: 87.9333% ( 51) 00:07:46.631 18148.431 - 18249.255: 88.4585% ( 40) 00:07:46.631 18249.255 - 18350.080: 88.6423% ( 14) 00:07:46.631 18350.080 - 18450.905: 88.9049% ( 20) 00:07:46.631 18450.905 - 18551.729: 89.1413% ( 18) 00:07:46.631 18551.729 - 18652.554: 89.6140% ( 36) 00:07:46.631 18652.554 - 18753.378: 89.9160% ( 23) 00:07:46.631 18753.378 - 18854.203: 90.2574% ( 26) 00:07:46.631 18854.203 - 18955.028: 90.3493% ( 7) 00:07:46.631 18955.028 - 19055.852: 90.4412% ( 7) 00:07:46.631 19055.852 - 19156.677: 90.4674% ( 2) 00:07:46.631 19156.677 - 19257.502: 90.4806% ( 1) 00:07:46.631 19257.502 - 19358.326: 90.6119% ( 10) 00:07:46.631 19358.326 - 19459.151: 90.8220% ( 16) 00:07:46.631 19459.151 - 19559.975: 91.0583% ( 18) 00:07:46.631 19559.975 - 19660.800: 91.3866% ( 25) 00:07:46.631 19660.800 - 19761.625: 91.7936% ( 31) 00:07:46.631 19761.625 - 19862.449: 91.9643% ( 13) 00:07:46.631 19862.449 - 19963.274: 92.1481% ( 14) 00:07:46.631 19963.274 - 20064.098: 92.3976% ( 19) 00:07:46.631 20064.098 - 20164.923: 92.6339% ( 18) 00:07:46.631 20164.923 - 20265.748: 92.8571% ( 17) 00:07:46.631 20265.748 - 20366.572: 93.0804% ( 17) 00:07:46.631 20366.572 - 20467.397: 93.3036% ( 17) 00:07:46.631 20467.397 - 20568.222: 93.5005% ( 15) 00:07:46.631 20568.222 - 
20669.046: 93.7106% ( 16) 00:07:46.631 20669.046 - 20769.871: 93.8944% ( 14) 00:07:46.631 20769.871 - 20870.695: 94.1833% ( 22) 00:07:46.631 20870.695 - 20971.520: 94.4722% ( 22) 00:07:46.631 20971.520 - 21072.345: 94.7742% ( 23) 00:07:46.631 21072.345 - 21173.169: 94.9842% ( 16) 00:07:46.631 21173.169 - 21273.994: 95.1287% ( 11) 00:07:46.631 21273.994 - 21374.818: 95.2600% ( 10) 00:07:46.631 21374.818 - 21475.643: 95.3519% ( 7) 00:07:46.631 21475.643 - 21576.468: 95.3913% ( 3) 00:07:46.631 21576.468 - 21677.292: 95.5488% ( 12) 00:07:46.631 21677.292 - 21778.117: 95.6539% ( 8) 00:07:46.631 21778.117 - 21878.942: 95.7458% ( 7) 00:07:46.631 21878.942 - 21979.766: 95.7983% ( 4) 00:07:46.631 22685.538 - 22786.363: 95.9296% ( 10) 00:07:46.631 22786.363 - 22887.188: 96.0609% ( 10) 00:07:46.631 22887.188 - 22988.012: 96.1397% ( 6) 00:07:46.631 22988.012 - 23088.837: 96.1922% ( 4) 00:07:46.631 23088.837 - 23189.662: 96.2447% ( 4) 00:07:46.631 23189.662 - 23290.486: 96.2710% ( 2) 00:07:46.631 23290.486 - 23391.311: 96.2841% ( 1) 00:07:46.631 23391.311 - 23492.135: 96.3235% ( 3) 00:07:46.631 23492.135 - 23592.960: 96.3761% ( 4) 00:07:46.631 23592.960 - 23693.785: 96.4286% ( 4) 00:07:46.631 23693.785 - 23794.609: 96.4942% ( 5) 00:07:46.631 23794.609 - 23895.434: 96.5205% ( 2) 00:07:46.631 23895.434 - 23996.258: 96.5730% ( 4) 00:07:46.631 23996.258 - 24097.083: 96.6387% ( 5) 00:07:46.631 33473.772 - 33675.422: 96.6912% ( 4) 00:07:46.631 33675.422 - 33877.071: 96.7700% ( 6) 00:07:46.631 33877.071 - 34078.720: 96.8487% ( 6) 00:07:46.631 34078.720 - 34280.369: 96.9275% ( 6) 00:07:46.631 34280.369 - 34482.018: 97.0063% ( 6) 00:07:46.631 34482.018 - 34683.668: 97.0851% ( 6) 00:07:46.631 34683.668 - 34885.317: 97.1507% ( 5) 00:07:46.631 34885.317 - 35086.966: 97.2295% ( 6) 00:07:46.631 35086.966 - 35288.615: 97.2952% ( 5) 00:07:46.631 35288.615 - 35490.265: 97.3739% ( 6) 00:07:46.631 35490.265 - 35691.914: 97.4265% ( 4) 00:07:46.631 35691.914 - 35893.563: 97.4790% ( 4) 00:07:46.631 43556.234 - 43757.883: 97.4921% ( 1) 00:07:46.631 43757.883 - 43959.532: 97.5578% ( 5) 00:07:46.631 43959.532 - 44161.182: 97.6234% ( 5) 00:07:46.631 44161.182 - 44362.831: 97.6628% ( 3) 00:07:46.631 44362.831 - 44564.480: 97.7416% ( 6) 00:07:46.631 44564.480 - 44766.129: 97.8072% ( 5) 00:07:46.631 44766.129 - 44967.778: 97.8860% ( 6) 00:07:46.631 44967.778 - 45169.428: 97.9648% ( 6) 00:07:46.631 45169.428 - 45371.077: 98.0305% ( 5) 00:07:46.631 45371.077 - 45572.726: 98.1092% ( 6) 00:07:46.631 45572.726 - 45774.375: 98.1749% ( 5) 00:07:46.631 45774.375 - 45976.025: 98.2537% ( 6) 00:07:46.631 45976.025 - 46177.674: 98.3193% ( 5) 00:07:46.631 75416.812 - 75820.111: 98.3718% ( 4) 00:07:46.631 75820.111 - 76223.409: 98.6476% ( 21) 00:07:46.631 76223.409 - 76626.708: 98.6607% ( 1) 00:07:46.631 78643.200 - 79046.498: 98.7132% ( 4) 00:07:46.631 79046.498 - 79449.797: 99.1597% ( 34) 00:07:46.631 80256.394 - 80659.692: 99.4879% ( 25) 00:07:46.631 80659.692 - 81062.991: 99.6061% ( 9) 00:07:46.631 81062.991 - 81466.289: 99.6586% ( 4) 00:07:46.631 84289.378 - 84692.677: 99.8030% ( 11) 00:07:46.631 84692.677 - 85095.975: 100.0000% ( 15) 00:07:46.631 00:07:46.632 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:46.632 ============================================================================== 00:07:46.632 Range in us Cumulative IO count 00:07:46.632 9628.751 - 9679.163: 0.0131% ( 1) 00:07:46.632 9679.163 - 9729.575: 0.0919% ( 6) 00:07:46.632 9729.575 - 9779.988: 0.1576% ( 5) 00:07:46.632 9779.988 - 9830.400: 0.2101% ( 4) 
00:07:46.632 9830.400 - 9880.812: 0.2757% ( 5) 00:07:46.632 9880.812 - 9931.225: 0.3545% ( 6) 00:07:46.632 9931.225 - 9981.637: 0.4333% ( 6) 00:07:46.632 9981.637 - 10032.049: 0.5121% ( 6) 00:07:46.632 10032.049 - 10082.462: 0.5909% ( 6) 00:07:46.632 10082.462 - 10132.874: 0.6959% ( 8) 00:07:46.632 10132.874 - 10183.286: 0.7747% ( 6) 00:07:46.632 10183.286 - 10233.698: 0.8141% ( 3) 00:07:46.632 10233.698 - 10284.111: 0.8403% ( 2) 00:07:46.632 13308.849 - 13409.674: 0.8666% ( 2) 00:07:46.632 13409.674 - 13510.498: 1.2211% ( 27) 00:07:46.632 13510.498 - 13611.323: 1.9170% ( 53) 00:07:46.632 13611.323 - 13712.148: 2.9412% ( 78) 00:07:46.632 13712.148 - 13812.972: 3.7553% ( 62) 00:07:46.632 13812.972 - 13913.797: 4.5693% ( 62) 00:07:46.632 13913.797 - 14014.622: 5.4884% ( 70) 00:07:46.632 14014.622 - 14115.446: 6.4863% ( 76) 00:07:46.632 14115.446 - 14216.271: 8.1014% ( 123) 00:07:46.632 14216.271 - 14317.095: 10.0578% ( 149) 00:07:46.632 14317.095 - 14417.920: 12.0011% ( 148) 00:07:46.632 14417.920 - 14518.745: 14.5746% ( 196) 00:07:46.632 14518.745 - 14619.569: 17.3451% ( 211) 00:07:46.632 14619.569 - 14720.394: 20.4832% ( 239) 00:07:46.632 14720.394 - 14821.218: 24.7243% ( 323) 00:07:46.632 14821.218 - 14922.043: 28.5583% ( 292) 00:07:46.632 14922.043 - 15022.868: 33.0226% ( 340) 00:07:46.632 15022.868 - 15123.692: 37.3030% ( 326) 00:07:46.632 15123.692 - 15224.517: 41.9512% ( 354) 00:07:46.632 15224.517 - 15325.342: 45.9559% ( 305) 00:07:46.632 15325.342 - 15426.166: 48.8577% ( 221) 00:07:46.632 15426.166 - 15526.991: 51.0504% ( 167) 00:07:46.632 15526.991 - 15627.815: 54.0179% ( 226) 00:07:46.632 15627.815 - 15728.640: 56.5257% ( 191) 00:07:46.632 15728.640 - 15829.465: 58.8104% ( 174) 00:07:46.632 15829.465 - 15930.289: 61.5809% ( 211) 00:07:46.632 15930.289 - 16031.114: 64.4433% ( 218) 00:07:46.632 16031.114 - 16131.938: 66.8855% ( 186) 00:07:46.632 16131.938 - 16232.763: 69.3015% ( 184) 00:07:46.632 16232.763 - 16333.588: 71.3892% ( 159) 00:07:46.632 16333.588 - 16434.412: 72.6234% ( 94) 00:07:46.632 16434.412 - 16535.237: 73.5951% ( 74) 00:07:46.632 16535.237 - 16636.062: 74.8030% ( 92) 00:07:46.632 16636.062 - 16736.886: 75.5777% ( 59) 00:07:46.632 16736.886 - 16837.711: 76.4312% ( 65) 00:07:46.632 16837.711 - 16938.535: 77.3634% ( 71) 00:07:46.632 16938.535 - 17039.360: 79.1098% ( 133) 00:07:46.632 17039.360 - 17140.185: 80.1471% ( 79) 00:07:46.632 17140.185 - 17241.009: 81.5126% ( 104) 00:07:46.632 17241.009 - 17341.834: 82.5499% ( 79) 00:07:46.632 17341.834 - 17442.658: 83.8367% ( 98) 00:07:46.632 17442.658 - 17543.483: 84.7164% ( 67) 00:07:46.632 17543.483 - 17644.308: 85.5567% ( 64) 00:07:46.632 17644.308 - 17745.132: 86.2658% ( 54) 00:07:46.632 17745.132 - 17845.957: 86.7384% ( 36) 00:07:46.632 17845.957 - 17946.782: 87.0930% ( 27) 00:07:46.632 17946.782 - 18047.606: 87.2768% ( 14) 00:07:46.632 18047.606 - 18148.431: 87.3818% ( 8) 00:07:46.632 18148.431 - 18249.255: 87.5657% ( 14) 00:07:46.632 18249.255 - 18350.080: 87.8283% ( 20) 00:07:46.632 18350.080 - 18450.905: 88.2353% ( 31) 00:07:46.632 18450.905 - 18551.729: 88.7736% ( 41) 00:07:46.632 18551.729 - 18652.554: 89.0494% ( 21) 00:07:46.632 18652.554 - 18753.378: 89.3908% ( 26) 00:07:46.632 18753.378 - 18854.203: 89.8241% ( 33) 00:07:46.632 18854.203 - 18955.028: 90.1523% ( 25) 00:07:46.632 18955.028 - 19055.852: 90.6644% ( 39) 00:07:46.632 19055.852 - 19156.677: 90.9926% ( 25) 00:07:46.632 19156.677 - 19257.502: 91.2946% ( 23) 00:07:46.632 19257.502 - 19358.326: 91.5704% ( 21) 00:07:46.632 19358.326 - 19459.151: 91.9118% ( 
26) 00:07:46.632 19459.151 - 19559.975: 92.1744% ( 20) 00:07:46.632 19559.975 - 19660.800: 92.2400% ( 5) 00:07:46.632 19660.800 - 19761.625: 92.3319% ( 7) 00:07:46.632 19761.625 - 19862.449: 92.5551% ( 17) 00:07:46.632 19862.449 - 19963.274: 92.7784% ( 17) 00:07:46.632 19963.274 - 20064.098: 93.0016% ( 17) 00:07:46.632 20064.098 - 20164.923: 93.2904% ( 22) 00:07:46.632 20164.923 - 20265.748: 93.8550% ( 43) 00:07:46.632 20265.748 - 20366.572: 94.1308% ( 21) 00:07:46.632 20366.572 - 20467.397: 94.3277% ( 15) 00:07:46.632 20467.397 - 20568.222: 94.4984% ( 13) 00:07:46.632 20568.222 - 20669.046: 94.6035% ( 8) 00:07:46.632 20669.046 - 20769.871: 94.7216% ( 9) 00:07:46.632 20769.871 - 20870.695: 94.8529% ( 10) 00:07:46.632 20870.695 - 20971.520: 95.0368% ( 14) 00:07:46.632 20971.520 - 21072.345: 95.1812% ( 11) 00:07:46.632 21072.345 - 21173.169: 95.5620% ( 29) 00:07:46.632 21173.169 - 21273.994: 95.6276% ( 5) 00:07:46.632 21273.994 - 21374.818: 95.7064% ( 6) 00:07:46.632 21374.818 - 21475.643: 95.7589% ( 4) 00:07:46.632 21475.643 - 21576.468: 95.7983% ( 3) 00:07:46.632 23391.311 - 23492.135: 95.8902% ( 7) 00:07:46.632 23492.135 - 23592.960: 95.9428% ( 4) 00:07:46.632 23592.960 - 23693.785: 96.0084% ( 5) 00:07:46.632 23693.785 - 23794.609: 96.0609% ( 4) 00:07:46.632 23794.609 - 23895.434: 96.1266% ( 5) 00:07:46.632 23895.434 - 23996.258: 96.1922% ( 5) 00:07:46.632 23996.258 - 24097.083: 96.2579% ( 5) 00:07:46.632 24097.083 - 24197.908: 96.3235% ( 5) 00:07:46.632 24197.908 - 24298.732: 96.3892% ( 5) 00:07:46.632 24298.732 - 24399.557: 96.4417% ( 4) 00:07:46.632 24399.557 - 24500.382: 96.4942% ( 4) 00:07:46.632 24500.382 - 24601.206: 96.5467% ( 4) 00:07:46.632 24601.206 - 24702.031: 96.6255% ( 6) 00:07:46.632 24702.031 - 24802.855: 96.6387% ( 1) 00:07:46.632 31658.929 - 31860.578: 96.7174% ( 6) 00:07:46.632 31860.578 - 32062.228: 96.7831% ( 5) 00:07:46.632 32062.228 - 32263.877: 96.8619% ( 6) 00:07:46.632 32263.877 - 32465.526: 96.9538% ( 7) 00:07:46.632 32465.526 - 32667.175: 97.0457% ( 7) 00:07:46.632 32667.175 - 32868.825: 97.1245% ( 6) 00:07:46.632 32868.825 - 33070.474: 97.1901% ( 5) 00:07:46.632 33070.474 - 33272.123: 97.2689% ( 6) 00:07:46.632 33272.123 - 33473.772: 97.3346% ( 5) 00:07:46.632 33473.772 - 33675.422: 97.4265% ( 7) 00:07:46.632 33675.422 - 33877.071: 97.4790% ( 4) 00:07:46.632 42144.689 - 42346.338: 97.5446% ( 5) 00:07:46.632 42346.338 - 42547.988: 97.6234% ( 6) 00:07:46.632 42547.988 - 42749.637: 97.7022% ( 6) 00:07:46.632 42749.637 - 42951.286: 97.7941% ( 7) 00:07:46.632 42951.286 - 43152.935: 97.8729% ( 6) 00:07:46.632 43152.935 - 43354.585: 97.9648% ( 7) 00:07:46.632 43354.585 - 43556.234: 98.0436% ( 6) 00:07:46.632 43556.234 - 43757.883: 98.1224% ( 6) 00:07:46.632 43757.883 - 43959.532: 98.2012% ( 6) 00:07:46.632 43959.532 - 44161.182: 98.2931% ( 7) 00:07:46.632 44161.182 - 44362.831: 98.3193% ( 2) 00:07:46.632 76223.409 - 76626.708: 98.3587% ( 3) 00:07:46.632 79046.498 - 79449.797: 98.3981% ( 3) 00:07:46.632 79449.797 - 79853.095: 99.1334% ( 56) 00:07:46.632 79853.095 - 80256.394: 99.1597% ( 2) 00:07:46.632 80659.692 - 81062.991: 99.3435% ( 14) 00:07:46.632 81062.991 - 81466.289: 99.9475% ( 46) 00:07:46.632 81466.289 - 81869.588: 99.9606% ( 1) 00:07:46.632 84692.677 - 85095.975: 100.0000% ( 3) 00:07:46.632 00:07:46.632 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:46.632 ============================================================================== 00:07:46.632 Range in us Cumulative IO count 00:07:46.632 7713.083 - 7763.495: 0.0264% ( 2) 
00:07:46.632 7763.495 - 7813.908: 0.0527% ( 2) 00:07:46.632 7813.908 - 7864.320: 0.0791% ( 2) 00:07:46.632 7864.320 - 7914.732: 0.1186% ( 3) 00:07:46.632 7914.732 - 7965.145: 0.1450% ( 2) 00:07:46.632 7965.145 - 8015.557: 0.1714% ( 2) 00:07:46.632 8015.557 - 8065.969: 0.1977% ( 2) 00:07:46.632 8065.969 - 8116.382: 0.2241% ( 2) 00:07:46.632 8116.382 - 8166.794: 0.2636% ( 3) 00:07:46.632 8166.794 - 8217.206: 0.2768% ( 1) 00:07:46.632 8217.206 - 8267.618: 0.3032% ( 2) 00:07:46.632 8267.618 - 8318.031: 0.3427% ( 3) 00:07:46.632 8318.031 - 8368.443: 0.3691% ( 2) 00:07:46.632 8368.443 - 8418.855: 0.3955% ( 2) 00:07:46.632 8418.855 - 8469.268: 0.4086% ( 1) 00:07:46.632 8469.268 - 8519.680: 0.4218% ( 1) 00:07:46.632 8519.680 - 8570.092: 0.4350% ( 1) 00:07:46.632 13409.674 - 13510.498: 0.4614% ( 2) 00:07:46.632 13510.498 - 13611.323: 0.6327% ( 13) 00:07:46.632 13611.323 - 13712.148: 0.8832% ( 19) 00:07:46.632 13712.148 - 13812.972: 1.4764% ( 45) 00:07:46.632 13812.972 - 13913.797: 2.5837% ( 84) 00:07:46.632 13913.797 - 14014.622: 4.0337% ( 110) 00:07:46.632 14014.622 - 14115.446: 5.3388% ( 99) 00:07:46.632 14115.446 - 14216.271: 6.9997% ( 126) 00:07:46.632 14216.271 - 14317.095: 9.2934% ( 174) 00:07:46.632 14317.095 - 14417.920: 11.8640% ( 195) 00:07:46.632 14417.920 - 14518.745: 14.7772% ( 221) 00:07:46.632 14518.745 - 14619.569: 18.1650% ( 257) 00:07:46.632 14619.569 - 14720.394: 22.1461% ( 302) 00:07:46.632 14720.394 - 14821.218: 26.9180% ( 362) 00:07:46.632 14821.218 - 14922.043: 30.9913% ( 309) 00:07:46.632 14922.043 - 15022.868: 34.4714% ( 264) 00:07:46.632 15022.868 - 15123.692: 38.2283% ( 285) 00:07:46.632 15123.692 - 15224.517: 41.9457% ( 282) 00:07:46.632 15224.517 - 15325.342: 45.5708% ( 275) 00:07:46.632 15325.342 - 15426.166: 48.4972% ( 222) 00:07:46.632 15426.166 - 15526.991: 51.3709% ( 218) 00:07:46.632 15526.991 - 15627.815: 55.0092% ( 276) 00:07:46.632 15627.815 - 15728.640: 58.9112% ( 296) 00:07:46.632 15728.640 - 15829.465: 61.4817% ( 195) 00:07:46.632 15829.465 - 15930.289: 64.0390% ( 194) 00:07:46.632 15930.289 - 16031.114: 65.7922% ( 133) 00:07:46.632 16031.114 - 16131.938: 67.2950% ( 114) 00:07:46.632 16131.938 - 16232.763: 68.7582% ( 111) 00:07:46.632 16232.763 - 16333.588: 70.7356% ( 150) 00:07:46.632 16333.588 - 16434.412: 71.9351% ( 91) 00:07:46.632 16434.412 - 16535.237: 72.7261% ( 60) 00:07:46.632 16535.237 - 16636.062: 73.6225% ( 68) 00:07:46.632 16636.062 - 16736.886: 74.6507% ( 78) 00:07:46.633 16736.886 - 16837.711: 75.8107% ( 88) 00:07:46.633 16837.711 - 16938.535: 76.8785% ( 81) 00:07:46.633 16938.535 - 17039.360: 77.7748% ( 68) 00:07:46.633 17039.360 - 17140.185: 78.5921% ( 62) 00:07:46.633 17140.185 - 17241.009: 79.6335% ( 79) 00:07:46.633 17241.009 - 17341.834: 80.6881% ( 80) 00:07:46.633 17341.834 - 17442.658: 81.6768% ( 75) 00:07:46.633 17442.658 - 17543.483: 83.0214% ( 102) 00:07:46.633 17543.483 - 17644.308: 83.9177% ( 68) 00:07:46.633 17644.308 - 17745.132: 84.7219% ( 61) 00:07:46.633 17745.132 - 17845.957: 85.6973% ( 74) 00:07:46.633 17845.957 - 17946.782: 86.6201% ( 70) 00:07:46.633 17946.782 - 18047.606: 87.3056% ( 52) 00:07:46.633 18047.606 - 18148.431: 87.9119% ( 46) 00:07:46.633 18148.431 - 18249.255: 88.3206% ( 31) 00:07:46.633 18249.255 - 18350.080: 88.8479% ( 40) 00:07:46.633 18350.080 - 18450.905: 89.1642% ( 24) 00:07:46.633 18450.905 - 18551.729: 89.4938% ( 25) 00:07:46.633 18551.729 - 18652.554: 89.9947% ( 38) 00:07:46.633 18652.554 - 18753.378: 90.4956% ( 38) 00:07:46.633 18753.378 - 18854.203: 90.9043% ( 31) 00:07:46.633 18854.203 - 
18955.028: 91.1679% ( 20) 00:07:46.633 18955.028 - 19055.852: 91.4184% ( 19) 00:07:46.633 19055.852 - 19156.677: 91.6425% ( 17) 00:07:46.633 19156.677 - 19257.502: 92.1566% ( 39) 00:07:46.633 19257.502 - 19358.326: 92.3675% ( 16) 00:07:46.633 19358.326 - 19459.151: 92.5784% ( 16) 00:07:46.633 19459.151 - 19559.975: 92.8421% ( 20) 00:07:46.633 19559.975 - 19660.800: 92.9871% ( 11) 00:07:46.633 19660.800 - 19761.625: 93.1321% ( 11) 00:07:46.633 19761.625 - 19862.449: 93.3562% ( 17) 00:07:46.633 19862.449 - 19963.274: 93.6198% ( 20) 00:07:46.633 19963.274 - 20064.098: 93.9230% ( 23) 00:07:46.633 20064.098 - 20164.923: 94.1735% ( 19) 00:07:46.633 20164.923 - 20265.748: 94.4239% ( 19) 00:07:46.633 20265.748 - 20366.572: 94.6085% ( 14) 00:07:46.633 20366.572 - 20467.397: 94.7535% ( 11) 00:07:46.633 20467.397 - 20568.222: 94.8458% ( 7) 00:07:46.633 20568.222 - 20669.046: 95.0040% ( 12) 00:07:46.633 20669.046 - 20769.871: 95.1358% ( 10) 00:07:46.633 20769.871 - 20870.695: 95.4917% ( 27) 00:07:46.633 20870.695 - 20971.520: 95.6499% ( 12) 00:07:46.633 20971.520 - 21072.345: 95.7158% ( 5) 00:07:46.633 21072.345 - 21173.169: 95.7685% ( 4) 00:07:46.633 21173.169 - 21273.994: 95.7817% ( 1) 00:07:46.633 23693.785 - 23794.609: 95.7949% ( 1) 00:07:46.633 23794.609 - 23895.434: 95.8476% ( 4) 00:07:46.633 23895.434 - 23996.258: 95.9267% ( 6) 00:07:46.633 23996.258 - 24097.083: 95.9794% ( 4) 00:07:46.633 24097.083 - 24197.908: 96.0322% ( 4) 00:07:46.633 24197.908 - 24298.732: 96.0849% ( 4) 00:07:46.633 24298.732 - 24399.557: 96.1376% ( 4) 00:07:46.633 24399.557 - 24500.382: 96.1640% ( 2) 00:07:46.633 24500.382 - 24601.206: 96.1772% ( 1) 00:07:46.633 24601.206 - 24702.031: 96.2035% ( 2) 00:07:46.633 24702.031 - 24802.855: 96.2826% ( 6) 00:07:46.633 24802.855 - 24903.680: 96.3881% ( 8) 00:07:46.633 24903.680 - 25004.505: 96.5199% ( 10) 00:07:46.633 25004.505 - 25105.329: 96.5990% ( 6) 00:07:46.633 25105.329 - 25206.154: 96.6254% ( 2) 00:07:46.633 31255.631 - 31457.280: 96.7045% ( 6) 00:07:46.633 31457.280 - 31658.929: 96.7835% ( 6) 00:07:46.633 31658.929 - 31860.578: 96.8758% ( 7) 00:07:46.633 31860.578 - 32062.228: 96.9681% ( 7) 00:07:46.633 32062.228 - 32263.877: 97.0472% ( 6) 00:07:46.633 32263.877 - 32465.526: 97.1395% ( 7) 00:07:46.633 32465.526 - 32667.175: 97.2186% ( 6) 00:07:46.633 32667.175 - 32868.825: 97.3108% ( 7) 00:07:46.633 32868.825 - 33070.474: 97.4031% ( 7) 00:07:46.633 33070.474 - 33272.123: 97.4690% ( 5) 00:07:46.633 41539.742 - 41741.391: 97.4954% ( 2) 00:07:46.633 41741.391 - 41943.040: 97.5745% ( 6) 00:07:46.633 41943.040 - 42144.689: 97.6404% ( 5) 00:07:46.633 42144.689 - 42346.338: 97.7195% ( 6) 00:07:46.633 42346.338 - 42547.988: 97.7986% ( 6) 00:07:46.633 42547.988 - 42749.637: 97.8777% ( 6) 00:07:46.633 42749.637 - 42951.286: 97.9699% ( 7) 00:07:46.633 42951.286 - 43152.935: 98.0622% ( 7) 00:07:46.633 43152.935 - 43354.585: 98.1413% ( 6) 00:07:46.633 43354.585 - 43556.234: 98.2336% ( 7) 00:07:46.633 43556.234 - 43757.883: 98.3127% ( 6) 00:07:46.633 67754.142 - 68157.440: 98.3259% ( 1) 00:07:46.633 76223.409 - 76626.708: 98.4709% ( 11) 00:07:46.633 76626.708 - 77030.006: 98.6159% ( 11) 00:07:46.633 79449.797 - 79853.095: 99.1432% ( 40) 00:07:46.633 79853.095 - 80256.394: 99.1563% ( 1) 00:07:46.633 81062.991 - 81466.289: 99.3013% ( 11) 00:07:46.633 84289.378 - 84692.677: 99.3804% ( 6) 00:07:46.633 84692.677 - 85095.975: 99.5650% ( 14) 00:07:46.633 85095.975 - 85499.274: 99.6045% ( 3) 00:07:46.633 88725.662 - 89128.960: 99.6309% ( 2) 00:07:46.633 89128.960 - 89532.258: 99.8286% ( 15) 
00:07:46.633 89532.258 - 89935.557: 100.0000% ( 13) 00:07:46.633 00:07:46.633 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:46.633 ============================================================================== 00:07:46.633 Range in us Cumulative IO count 00:07:46.633 13208.025 - 13308.849: 0.0132% ( 1) 00:07:46.633 13308.849 - 13409.674: 0.1059% ( 7) 00:07:46.633 13409.674 - 13510.498: 0.2516% ( 11) 00:07:46.633 13510.498 - 13611.323: 0.5297% ( 21) 00:07:46.633 13611.323 - 13712.148: 0.9931% ( 35) 00:07:46.633 13712.148 - 13812.972: 1.4831% ( 37) 00:07:46.633 13812.972 - 13913.797: 2.5026% ( 77) 00:07:46.633 13913.797 - 14014.622: 3.8930% ( 105) 00:07:46.633 14014.622 - 14115.446: 6.0117% ( 160) 00:07:46.633 14115.446 - 14216.271: 8.8718% ( 216) 00:07:46.633 14216.271 - 14317.095: 11.3480% ( 187) 00:07:46.633 14317.095 - 14417.920: 13.6520% ( 174) 00:07:46.633 14417.920 - 14518.745: 16.1944% ( 192) 00:07:46.633 14518.745 - 14619.569: 18.5249% ( 176) 00:07:46.633 14619.569 - 14720.394: 21.6764% ( 238) 00:07:46.633 14720.394 - 14821.218: 25.7812% ( 310) 00:07:46.633 14821.218 - 14922.043: 30.4423% ( 352) 00:07:46.633 14922.043 - 15022.868: 34.9576% ( 341) 00:07:46.633 15022.868 - 15123.692: 39.2479% ( 324) 00:07:46.633 15123.692 - 15224.517: 42.5053% ( 246) 00:07:46.633 15224.517 - 15325.342: 45.5244% ( 228) 00:07:46.633 15325.342 - 15426.166: 48.3183% ( 211) 00:07:46.633 15426.166 - 15526.991: 50.9666% ( 200) 00:07:46.633 15526.991 - 15627.815: 54.1578% ( 241) 00:07:46.633 15627.815 - 15728.640: 57.2961% ( 237) 00:07:46.633 15728.640 - 15829.465: 60.4211% ( 236) 00:07:46.633 15829.465 - 15930.289: 62.8708% ( 185) 00:07:46.633 15930.289 - 16031.114: 64.8305% ( 148) 00:07:46.633 16031.114 - 16131.938: 66.1811% ( 102) 00:07:46.633 16131.938 - 16232.763: 67.9158% ( 131) 00:07:46.633 16232.763 - 16333.588: 69.9947% ( 157) 00:07:46.633 16333.588 - 16434.412: 71.5704% ( 119) 00:07:46.633 16434.412 - 16535.237: 72.7622% ( 90) 00:07:46.633 16535.237 - 16636.062: 73.7818% ( 77) 00:07:46.633 16636.062 - 16736.886: 74.8941% ( 84) 00:07:46.633 16736.886 - 16837.711: 76.0328% ( 86) 00:07:46.633 16837.711 - 16938.535: 77.2775% ( 94) 00:07:46.633 16938.535 - 17039.360: 78.7738% ( 113) 00:07:46.633 17039.360 - 17140.185: 79.7669% ( 75) 00:07:46.633 17140.185 - 17241.009: 80.9190% ( 87) 00:07:46.633 17241.009 - 17341.834: 82.0445% ( 85) 00:07:46.633 17341.834 - 17442.658: 82.5477% ( 38) 00:07:46.633 17442.658 - 17543.483: 82.8787% ( 25) 00:07:46.633 17543.483 - 17644.308: 83.3422% ( 35) 00:07:46.633 17644.308 - 17745.132: 83.8586% ( 39) 00:07:46.633 17745.132 - 17845.957: 84.4280% ( 43) 00:07:46.633 17845.957 - 17946.782: 84.9047% ( 36) 00:07:46.633 17946.782 - 18047.606: 85.6197% ( 54) 00:07:46.633 18047.606 - 18148.431: 86.5069% ( 67) 00:07:46.633 18148.431 - 18249.255: 87.3146% ( 61) 00:07:46.633 18249.255 - 18350.080: 87.7516% ( 33) 00:07:46.633 18350.080 - 18450.905: 88.0959% ( 26) 00:07:46.633 18450.905 - 18551.729: 88.3210% ( 17) 00:07:46.633 18551.729 - 18652.554: 88.6785% ( 27) 00:07:46.633 18652.554 - 18753.378: 88.9036% ( 17) 00:07:46.633 18753.378 - 18854.203: 89.2744% ( 28) 00:07:46.633 18854.203 - 18955.028: 90.0689% ( 60) 00:07:46.633 18955.028 - 19055.852: 90.8369% ( 58) 00:07:46.633 19055.852 - 19156.677: 91.3400% ( 38) 00:07:46.633 19156.677 - 19257.502: 91.8167% ( 36) 00:07:46.633 19257.502 - 19358.326: 92.2007% ( 29) 00:07:46.633 19358.326 - 19459.151: 92.6774% ( 36) 00:07:46.633 19459.151 - 19559.975: 93.1276% ( 34) 00:07:46.633 19559.975 - 19660.800: 93.4322% ( 23) 
00:07:46.633 19660.800 - 19761.625: 93.9486% ( 39) 00:07:46.633 19761.625 - 19862.449: 94.2399% ( 22) 00:07:46.633 19862.449 - 19963.274: 94.5710% ( 25) 00:07:46.633 19963.274 - 20064.098: 94.7299% ( 12) 00:07:46.633 20064.098 - 20164.923: 94.7961% ( 5) 00:07:46.633 20164.923 - 20265.748: 94.8755% ( 6) 00:07:46.633 20265.748 - 20366.572: 94.9815% ( 8) 00:07:46.633 20366.572 - 20467.397: 95.0477% ( 5) 00:07:46.633 20467.397 - 20568.222: 95.1801% ( 10) 00:07:46.633 20568.222 - 20669.046: 95.5641% ( 29) 00:07:46.633 20669.046 - 20769.871: 95.6568% ( 7) 00:07:46.633 20769.871 - 20870.695: 95.7230% ( 5) 00:07:46.633 20870.695 - 20971.520: 95.7627% ( 3) 00:07:46.633 23290.486 - 23391.311: 95.7760% ( 1) 00:07:46.633 23391.311 - 23492.135: 95.7892% ( 1) 00:07:46.633 23492.135 - 23592.960: 95.8554% ( 5) 00:07:46.633 23592.960 - 23693.785: 95.9481% ( 7) 00:07:46.633 23693.785 - 23794.609: 96.0540% ( 8) 00:07:46.633 23794.609 - 23895.434: 96.1335% ( 6) 00:07:46.633 23895.434 - 23996.258: 96.2262% ( 7) 00:07:46.633 23996.258 - 24097.083: 96.3321% ( 8) 00:07:46.633 24097.083 - 24197.908: 96.3851% ( 4) 00:07:46.633 24197.908 - 24298.732: 96.4380% ( 4) 00:07:46.633 24298.732 - 24399.557: 96.4910% ( 4) 00:07:46.633 24399.557 - 24500.382: 96.5175% ( 2) 00:07:46.633 24500.382 - 24601.206: 96.5704% ( 4) 00:07:46.633 24601.206 - 24702.031: 96.6102% ( 3) 00:07:46.633 30045.735 - 30247.385: 96.6764% ( 5) 00:07:46.633 30247.385 - 30449.034: 96.7691% ( 7) 00:07:46.633 30449.034 - 30650.683: 96.8618% ( 7) 00:07:46.633 30650.683 - 30852.332: 96.9544% ( 7) 00:07:46.633 30852.332 - 31053.982: 97.0471% ( 7) 00:07:46.633 31053.982 - 31255.631: 97.1398% ( 7) 00:07:46.633 31255.631 - 31457.280: 97.2325% ( 7) 00:07:46.633 31457.280 - 31658.929: 97.3252% ( 7) 00:07:46.633 31658.929 - 31860.578: 97.4179% ( 7) 00:07:46.633 31860.578 - 32062.228: 97.4576% ( 3) 00:07:46.634 39926.548 - 40128.197: 97.4974% ( 3) 00:07:46.634 40128.197 - 40329.846: 97.5768% ( 6) 00:07:46.634 40329.846 - 40531.495: 97.6695% ( 7) 00:07:46.634 40531.495 - 40733.145: 97.7489% ( 6) 00:07:46.634 40733.145 - 40934.794: 97.8284% ( 6) 00:07:46.634 40934.794 - 41136.443: 97.9211% ( 7) 00:07:46.634 41136.443 - 41338.092: 98.0138% ( 7) 00:07:46.634 41338.092 - 41539.742: 98.0932% ( 6) 00:07:46.634 41539.742 - 41741.391: 98.1727% ( 6) 00:07:46.634 41741.391 - 41943.040: 98.2654% ( 7) 00:07:46.634 41943.040 - 42144.689: 98.3051% ( 3) 00:07:46.634 80256.394 - 80659.692: 98.8480% ( 41) 00:07:46.634 83482.782 - 83886.080: 99.1261% ( 21) 00:07:46.634 83886.080 - 84289.378: 99.1525% ( 2) 00:07:46.634 84692.677 - 85095.975: 99.4439% ( 22) 00:07:46.634 85095.975 - 85499.274: 99.4571% ( 1) 00:07:46.634 88725.662 - 89128.960: 99.5365% ( 6) 00:07:46.634 89128.960 - 89532.258: 99.6160% ( 6) 00:07:46.634 89532.258 - 89935.557: 99.8279% ( 16) 00:07:46.634 89935.557 - 90338.855: 100.0000% ( 13) 00:07:46.634 00:07:46.634 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:46.634 ============================================================================== 00:07:46.634 Range in us Cumulative IO count 00:07:46.634 13107.200 - 13208.025: 0.0263% ( 2) 00:07:46.634 13208.025 - 13308.849: 0.2101% ( 14) 00:07:46.634 13308.849 - 13409.674: 0.3939% ( 14) 00:07:46.634 13409.674 - 13510.498: 0.6171% ( 17) 00:07:46.634 13510.498 - 13611.323: 0.8666% ( 19) 00:07:46.634 13611.323 - 13712.148: 1.1817% ( 24) 00:07:46.634 13712.148 - 13812.972: 1.6544% ( 36) 00:07:46.634 13812.972 - 13913.797: 2.5341% ( 67) 00:07:46.634 13913.797 - 14014.622: 3.9128% ( 105) 00:07:46.634 
14014.622 - 14115.446: 5.5541% ( 125) 00:07:46.634 14115.446 - 14216.271: 7.4842% ( 147) 00:07:46.634 14216.271 - 14317.095: 9.7558% ( 173) 00:07:46.634 14317.095 - 14417.920: 12.0930% ( 178) 00:07:46.634 14417.920 - 14518.745: 14.8109% ( 207) 00:07:46.634 14518.745 - 14619.569: 17.7521% ( 224) 00:07:46.634 14619.569 - 14720.394: 21.2841% ( 269) 00:07:46.634 14720.394 - 14821.218: 25.0788% ( 289) 00:07:46.634 14821.218 - 14922.043: 29.4512% ( 333) 00:07:46.634 14922.043 - 15022.868: 34.0074% ( 347) 00:07:46.634 15022.868 - 15123.692: 39.2726% ( 401) 00:07:46.634 15123.692 - 15224.517: 44.3015% ( 383) 00:07:46.634 15224.517 - 15325.342: 48.0173% ( 283) 00:07:46.634 15325.342 - 15426.166: 51.5100% ( 266) 00:07:46.634 15426.166 - 15526.991: 54.1098% ( 198) 00:07:46.634 15526.991 - 15627.815: 56.6570% ( 194) 00:07:46.634 15627.815 - 15728.640: 59.0205% ( 180) 00:07:46.634 15728.640 - 15829.465: 61.7253% ( 206) 00:07:46.634 15829.465 - 15930.289: 63.7736% ( 156) 00:07:46.634 15930.289 - 16031.114: 66.4916% ( 207) 00:07:46.634 16031.114 - 16131.938: 67.9359% ( 110) 00:07:46.634 16131.938 - 16232.763: 69.7085% ( 135) 00:07:46.634 16232.763 - 16333.588: 71.4548% ( 133) 00:07:46.634 16333.588 - 16434.412: 72.9911% ( 117) 00:07:46.634 16434.412 - 16535.237: 74.6849% ( 129) 00:07:46.634 16535.237 - 16636.062: 75.7878% ( 84) 00:07:46.634 16636.062 - 16736.886: 76.6282% ( 64) 00:07:46.634 16736.886 - 16837.711: 77.5210% ( 68) 00:07:46.634 16837.711 - 16938.535: 78.1250% ( 46) 00:07:46.634 16938.535 - 17039.360: 78.9128% ( 60) 00:07:46.634 17039.360 - 17140.185: 79.8057% ( 68) 00:07:46.634 17140.185 - 17241.009: 80.6985% ( 68) 00:07:46.634 17241.009 - 17341.834: 81.3944% ( 53) 00:07:46.634 17341.834 - 17442.658: 82.1560% ( 58) 00:07:46.634 17442.658 - 17543.483: 82.7075% ( 42) 00:07:46.634 17543.483 - 17644.308: 83.3640% ( 50) 00:07:46.634 17644.308 - 17745.132: 83.9548% ( 45) 00:07:46.634 17745.132 - 17845.957: 84.5982% ( 49) 00:07:46.634 17845.957 - 17946.782: 85.2941% ( 53) 00:07:46.634 17946.782 - 18047.606: 86.0294% ( 56) 00:07:46.634 18047.606 - 18148.431: 86.4758% ( 34) 00:07:46.634 18148.431 - 18249.255: 86.8697% ( 30) 00:07:46.634 18249.255 - 18350.080: 87.2768% ( 31) 00:07:46.634 18350.080 - 18450.905: 87.8020% ( 40) 00:07:46.634 18450.905 - 18551.729: 88.2090% ( 31) 00:07:46.634 18551.729 - 18652.554: 88.9312% ( 55) 00:07:46.634 18652.554 - 18753.378: 89.3251% ( 30) 00:07:46.634 18753.378 - 18854.203: 89.7190% ( 30) 00:07:46.634 18854.203 - 18955.028: 90.3493% ( 48) 00:07:46.634 18955.028 - 19055.852: 90.5331% ( 14) 00:07:46.634 19055.852 - 19156.677: 90.7563% ( 17) 00:07:46.634 19156.677 - 19257.502: 90.8745% ( 9) 00:07:46.634 19257.502 - 19358.326: 91.0058% ( 10) 00:07:46.634 19358.326 - 19459.151: 91.1765% ( 13) 00:07:46.634 19459.151 - 19559.975: 91.3603% ( 14) 00:07:46.634 19559.975 - 19660.800: 91.7017% ( 26) 00:07:46.634 19660.800 - 19761.625: 91.8855% ( 14) 00:07:46.634 19761.625 - 19862.449: 92.0299% ( 11) 00:07:46.634 19862.449 - 19963.274: 92.1744% ( 11) 00:07:46.634 19963.274 - 20064.098: 92.3057% ( 10) 00:07:46.634 20064.098 - 20164.923: 92.3976% ( 7) 00:07:46.634 20164.923 - 20265.748: 92.5945% ( 15) 00:07:46.634 20265.748 - 20366.572: 92.7521% ( 12) 00:07:46.634 20366.572 - 20467.397: 93.1066% ( 27) 00:07:46.634 20467.397 - 20568.222: 93.7500% ( 49) 00:07:46.634 20568.222 - 20669.046: 94.3671% ( 47) 00:07:46.634 20669.046 - 20769.871: 95.0499% ( 52) 00:07:46.634 20769.871 - 20870.695: 95.4569% ( 31) 00:07:46.634 20870.695 - 20971.520: 95.7852% ( 25) 00:07:46.634 20971.520 - 
21072.345: 95.9953% ( 16) 00:07:46.634 21072.345 - 21173.169: 96.0478% ( 4) 00:07:46.634 21173.169 - 21273.994: 96.1003% ( 4) 00:07:46.634 21273.994 - 21374.818: 96.1791% ( 6) 00:07:46.634 21374.818 - 21475.643: 96.2316% ( 4) 00:07:46.634 21475.643 - 21576.468: 96.2710% ( 3) 00:07:46.634 21576.468 - 21677.292: 96.3235% ( 4) 00:07:46.634 21677.292 - 21778.117: 96.3629% ( 3) 00:07:46.634 21778.117 - 21878.942: 96.4154% ( 4) 00:07:46.634 21878.942 - 21979.766: 96.4548% ( 3) 00:07:46.634 21979.766 - 22080.591: 96.4942% ( 3) 00:07:46.634 22080.591 - 22181.415: 96.5336% ( 3) 00:07:46.634 22181.415 - 22282.240: 96.5730% ( 3) 00:07:46.634 22282.240 - 22383.065: 96.6255% ( 4) 00:07:46.634 22383.065 - 22483.889: 96.6387% ( 1) 00:07:46.634 22887.188 - 22988.012: 96.6518% ( 1) 00:07:46.634 22988.012 - 23088.837: 96.7174% ( 5) 00:07:46.634 23088.837 - 23189.662: 96.7962% ( 6) 00:07:46.634 23189.662 - 23290.486: 96.8750% ( 6) 00:07:46.634 23290.486 - 23391.311: 97.0063% ( 10) 00:07:46.634 23391.311 - 23492.135: 97.1507% ( 11) 00:07:46.634 23492.135 - 23592.960: 97.2295% ( 6) 00:07:46.634 23592.960 - 23693.785: 97.2820% ( 4) 00:07:46.634 23693.785 - 23794.609: 97.3477% ( 5) 00:07:46.634 23794.609 - 23895.434: 97.4133% ( 5) 00:07:46.634 23895.434 - 23996.258: 97.4659% ( 4) 00:07:46.634 23996.258 - 24097.083: 97.4790% ( 1) 00:07:46.634 29642.437 - 29844.086: 97.5315% ( 4) 00:07:46.634 29844.086 - 30045.735: 97.6234% ( 7) 00:07:46.634 30045.735 - 30247.385: 97.7153% ( 7) 00:07:46.634 30247.385 - 30449.034: 97.8072% ( 7) 00:07:46.634 30449.034 - 30650.683: 97.8860% ( 6) 00:07:46.634 30650.683 - 30852.332: 97.9911% ( 8) 00:07:46.634 30852.332 - 31053.982: 98.0830% ( 7) 00:07:46.634 31053.982 - 31255.631: 98.1618% ( 6) 00:07:46.634 31255.631 - 31457.280: 98.2537% ( 7) 00:07:46.634 31457.280 - 31658.929: 98.3193% ( 5) 00:07:46.634 80659.692 - 81062.991: 98.5688% ( 19) 00:07:46.634 83886.080 - 84289.378: 99.1597% ( 45) 00:07:46.634 84692.677 - 85095.975: 99.1859% ( 2) 00:07:46.634 85095.975 - 85499.274: 99.7374% ( 42) 00:07:46.634 85499.274 - 85902.572: 99.7505% ( 1) 00:07:46.634 88725.662 - 89128.960: 99.7637% ( 1) 00:07:46.634 89128.960 - 89532.258: 100.0000% ( 18) 00:07:46.634 00:07:46.634 12:33:46 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:46.634 00:07:46.634 real 0m2.637s 00:07:46.634 user 0m2.248s 00:07:46.634 sys 0m0.261s 00:07:46.634 12:33:46 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.634 ************************************ 00:07:46.634 END TEST nvme_perf 00:07:46.634 ************************************ 00:07:46.634 12:33:46 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.634 12:33:46 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:46.634 12:33:46 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:46.634 12:33:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.634 12:33:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.634 ************************************ 00:07:46.634 START TEST nvme_hello_world 00:07:46.634 ************************************ 00:07:46.635 12:33:46 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:46.895 Initializing NVMe Controllers 00:07:46.895 Attached to 0000:00:13.0 00:07:46.895 Namespace ID: 1 size: 1GB 00:07:46.895 Attached to 0000:00:10.0 00:07:46.895 Namespace ID: 1 size: 6GB 00:07:46.895 Attached to 
0000:00:11.0 00:07:46.895 Namespace ID: 1 size: 5GB 00:07:46.895 Attached to 0000:00:12.0 00:07:46.896 Namespace ID: 1 size: 4GB 00:07:46.896 Namespace ID: 2 size: 4GB 00:07:46.896 Namespace ID: 3 size: 4GB 00:07:46.896 Initialization complete. 00:07:46.896 INFO: using host memory buffer for IO 00:07:46.896 Hello world! 00:07:46.896 INFO: using host memory buffer for IO 00:07:46.896 Hello world! 00:07:46.896 INFO: using host memory buffer for IO 00:07:46.896 Hello world! 00:07:46.896 INFO: using host memory buffer for IO 00:07:46.896 Hello world! 00:07:46.896 INFO: using host memory buffer for IO 00:07:46.896 Hello world! 00:07:46.896 INFO: using host memory buffer for IO 00:07:46.896 Hello world! 00:07:46.896 00:07:46.896 real 0m0.250s 00:07:46.896 user 0m0.086s 00:07:46.896 sys 0m0.114s 00:07:46.896 12:33:46 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.896 ************************************ 00:07:46.896 12:33:46 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:46.896 END TEST nvme_hello_world 00:07:46.896 ************************************ 00:07:46.896 12:33:46 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:46.896 12:33:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.896 12:33:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.896 12:33:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.896 ************************************ 00:07:46.896 START TEST nvme_sgl 00:07:46.896 ************************************ 00:07:46.896 12:33:46 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:47.157 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:47.157 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:47.157 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:47.157 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:47.157 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:47.157 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:47.157 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:47.157 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:47.157 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:47.157 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:47.157 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:47.418 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:47.418 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:47.418 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:07:47.418 0000:00:12.0: 
build_io_request_0 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:47.418 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:47.418 NVMe Readv/Writev Request test 00:07:47.418 Attached to 0000:00:13.0 00:07:47.418 Attached to 0000:00:10.0 00:07:47.418 Attached to 0000:00:11.0 00:07:47.418 Attached to 0000:00:12.0 00:07:47.418 0000:00:10.0: build_io_request_2 test passed 00:07:47.418 0000:00:10.0: build_io_request_4 test passed 00:07:47.418 0000:00:10.0: build_io_request_5 test passed 00:07:47.418 0000:00:10.0: build_io_request_6 test passed 00:07:47.418 0000:00:10.0: build_io_request_7 test passed 00:07:47.418 0000:00:10.0: build_io_request_10 test passed 00:07:47.418 0000:00:11.0: build_io_request_2 test passed 00:07:47.418 0000:00:11.0: build_io_request_4 test passed 00:07:47.418 0000:00:11.0: build_io_request_5 test passed 00:07:47.418 0000:00:11.0: build_io_request_6 test passed 00:07:47.418 0000:00:11.0: build_io_request_7 test passed 00:07:47.418 0000:00:11.0: build_io_request_10 test passed 00:07:47.418 Cleaning up... 00:07:47.418 00:07:47.418 real 0m0.371s 00:07:47.418 user 0m0.177s 00:07:47.418 sys 0m0.147s 00:07:47.418 12:33:46 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.418 12:33:46 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:47.418 ************************************ 00:07:47.418 END TEST nvme_sgl 00:07:47.418 ************************************ 00:07:47.418 12:33:46 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:47.418 12:33:46 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.418 12:33:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.418 12:33:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.418 ************************************ 00:07:47.418 START TEST nvme_e2edp 00:07:47.418 ************************************ 00:07:47.418 12:33:47 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:47.680 NVMe Write/Read with End-to-End data protection test 00:07:47.680 Attached to 0000:00:13.0 00:07:47.680 Attached to 0000:00:10.0 00:07:47.680 Attached to 0000:00:11.0 00:07:47.680 Attached to 0000:00:12.0 00:07:47.680 Cleaning up... 
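The START TEST / END TEST banners and the real/user/sys triplets bracketing each case above come from the run_test helper in common/autotest_common.sh. A simplified sketch of the pattern it implements, inferred from this log (illustration only, not the actual SPDK implementation, which also manages xtrace state around the call):

  # run_test <name> <command...>: banner, timed execution, banner.
  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                # produces the real/user/sys lines seen in this log
      echo "END TEST $name"
  }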
00:07:47.680 00:07:47.680 real 0m0.284s 00:07:47.680 user 0m0.095s 00:07:47.680 sys 0m0.142s 00:07:47.680 12:33:47 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.680 ************************************ 00:07:47.680 END TEST nvme_e2edp 00:07:47.680 12:33:47 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:47.680 ************************************ 00:07:47.680 12:33:47 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:47.680 12:33:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.680 12:33:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.680 12:33:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.680 ************************************ 00:07:47.680 START TEST nvme_reserve 00:07:47.680 ************************************ 00:07:47.680 12:33:47 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:47.941 ===================================================== 00:07:47.941 NVMe Controller at PCI bus 0, device 19, function 0 00:07:47.941 ===================================================== 00:07:47.941 Reservations: Not Supported 00:07:47.941 ===================================================== 00:07:47.941 NVMe Controller at PCI bus 0, device 16, function 0 00:07:47.941 ===================================================== 00:07:47.941 Reservations: Not Supported 00:07:47.941 ===================================================== 00:07:47.941 NVMe Controller at PCI bus 0, device 17, function 0 00:07:47.941 ===================================================== 00:07:47.941 Reservations: Not Supported 00:07:47.941 ===================================================== 00:07:47.941 NVMe Controller at PCI bus 0, device 18, function 0 00:07:47.941 ===================================================== 00:07:47.941 Reservations: Not Supported 00:07:47.941 Reservation test passed 00:07:47.941 00:07:47.941 real 0m0.248s 00:07:47.941 user 0m0.080s 00:07:47.941 sys 0m0.121s 00:07:47.941 12:33:47 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.941 12:33:47 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:47.941 ************************************ 00:07:47.941 END TEST nvme_reserve 00:07:47.941 ************************************ 00:07:47.941 12:33:47 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:47.941 12:33:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.941 12:33:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.941 12:33:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.941 ************************************ 00:07:47.941 START TEST nvme_err_injection 00:07:47.941 ************************************ 00:07:47.941 12:33:47 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:48.203 NVMe Error Injection test 00:07:48.203 Attached to 0000:00:13.0 00:07:48.203 Attached to 0000:00:10.0 00:07:48.203 Attached to 0000:00:11.0 00:07:48.203 Attached to 0000:00:12.0 00:07:48.203 0000:00:10.0: get features failed as expected 00:07:48.203 0000:00:11.0: get features failed as expected 00:07:48.203 0000:00:12.0: get features failed as expected 00:07:48.203 0000:00:13.0: get features failed as expected 00:07:48.203 
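Every "Reservations: Not Supported" line above comes straight from the controller's identify data: the QEMU-emulated controllers leave the reservations bit of the ONCS field clear, so the reserve test passes without issuing any reservation commands. A short sketch of that capability check (the helper name is illustrative):

    #include <stdbool.h>
    #include <spdk/nvme.h>

    static bool
    ctrlr_supports_reservations(struct spdk_nvme_ctrlr *ctrlr)
    {
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

        /* ONCS bit 5: reservation register/acquire/release/report supported. */
        return cdata->oncs.reservations != 0;
    }

On hardware that does set the bit, the test would continue with spdk_nvme_ns_cmd_reservation_register() and the related acquire/release calls against namespace 1.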
0000:00:13.0: get features successfully as expected 00:07:48.203 0000:00:10.0: get features successfully as expected 00:07:48.203 0000:00:11.0: get features successfully as expected 00:07:48.203 0000:00:12.0: get features successfully as expected 00:07:48.203 0000:00:12.0: read failed as expected 00:07:48.203 0000:00:13.0: read failed as expected 00:07:48.203 0000:00:10.0: read failed as expected 00:07:48.203 0000:00:11.0: read failed as expected 00:07:48.203 0000:00:12.0: read successfully as expected 00:07:48.203 0000:00:13.0: read successfully as expected 00:07:48.203 0000:00:10.0: read successfully as expected 00:07:48.203 0000:00:11.0: read successfully as expected 00:07:48.203 Cleaning up... 00:07:48.203 00:07:48.203 real 0m0.263s 00:07:48.203 user 0m0.083s 00:07:48.203 sys 0m0.130s 00:07:48.203 12:33:47 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.203 12:33:47 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:48.203 ************************************ 00:07:48.203 END TEST nvme_err_injection 00:07:48.203 ************************************ 00:07:48.464 12:33:47 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:48.464 12:33:47 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:48.464 12:33:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.464 12:33:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.464 ************************************ 00:07:48.464 START TEST nvme_overhead 00:07:48.464 ************************************ 00:07:48.464 12:33:48 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:49.853 Initializing NVMe Controllers 00:07:49.853 Attached to 0000:00:13.0 00:07:49.853 Attached to 0000:00:10.0 00:07:49.853 Attached to 0000:00:11.0 00:07:49.853 Attached to 0000:00:12.0 00:07:49.853 Initialization complete. Launching workers. 
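The "failed as expected" / "successfully as expected" pairs in the nvme_err_injection output come from SPDK's software error injection: the test arms a one-shot error on an admin opcode, watches the command fail, then disarms the injection and retries. A hedged sketch of arming one such error (the status values follow the NVMe spec; the test's exact choices may differ):

    #include <spdk/nvme.h>
    #include <spdk/nvme_spec.h>

    static int
    inject_get_features_error(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* qpair == NULL targets the admin queue. Fail the next Get Features
         * command once with a generic Invalid Opcode status. */
        return spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
                SPDK_NVME_OPC_GET_FEATURES,
                false,  /* do_not_submit: command still reaches the device */
                0,      /* timeout_in_us: unused when do_not_submit is false */
                1,      /* err_count: one-shot */
                SPDK_NVME_SCT_GENERIC,
                SPDK_NVME_SC_INVALID_OPCODE);
    }

spdk_nvme_qpair_remove_cmd_error_injection() with the same controller/qpair/opcode disarms it, which is why the retried operations then "read successfully as expected".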
00:07:49.853 submit (in ns) avg, min, max = 13125.5, 10175.4, 123112.3 00:07:49.853 complete (in ns) avg, min, max = 8399.2, 7281.5, 320914.6 00:07:49.853 00:07:49.853 Submit histogram 00:07:49.853 ================ 00:07:49.853 Range in us Cumulative Count 00:07:49.853 10.142 - 10.191: 0.0187% ( 1) 00:07:49.853 10.437 - 10.486: 0.0375% ( 1) 00:07:49.853 10.486 - 10.535: 0.0562% ( 1) 00:07:49.853 10.978 - 11.028: 0.0749% ( 1) 00:07:49.853 11.028 - 11.077: 0.2060% ( 7) 00:07:49.853 11.077 - 11.126: 0.9925% ( 42) 00:07:49.853 11.126 - 11.175: 3.5768% ( 138) 00:07:49.853 11.175 - 11.225: 7.6966% ( 220) 00:07:49.853 11.225 - 11.274: 13.1273% ( 290) 00:07:49.853 11.274 - 11.323: 19.4195% ( 336) 00:07:49.853 11.323 - 11.372: 24.6816% ( 281) 00:07:49.853 11.372 - 11.422: 28.8390% ( 222) 00:07:49.853 11.422 - 11.471: 32.0787% ( 173) 00:07:49.853 11.471 - 11.520: 35.0375% ( 158) 00:07:49.853 11.520 - 11.569: 37.5843% ( 136) 00:07:49.853 11.569 - 11.618: 39.8689% ( 122) 00:07:49.853 11.618 - 11.668: 41.9101% ( 109) 00:07:49.853 11.668 - 11.717: 43.5019% ( 85) 00:07:49.853 11.717 - 11.766: 45.1498% ( 88) 00:07:49.853 11.766 - 11.815: 46.5730% ( 76) 00:07:49.853 11.815 - 11.865: 47.3596% ( 42) 00:07:49.853 11.865 - 11.914: 48.2210% ( 46) 00:07:49.853 11.914 - 11.963: 49.0637% ( 45) 00:07:49.853 11.963 - 12.012: 49.8689% ( 43) 00:07:49.853 12.012 - 12.062: 50.5618% ( 37) 00:07:49.853 12.062 - 12.111: 51.3109% ( 40) 00:07:49.853 12.111 - 12.160: 51.8165% ( 27) 00:07:49.853 12.160 - 12.209: 52.3408% ( 28) 00:07:49.853 12.209 - 12.258: 52.9775% ( 34) 00:07:49.853 12.258 - 12.308: 53.6330% ( 35) 00:07:49.853 12.308 - 12.357: 54.1011% ( 25) 00:07:49.853 12.357 - 12.406: 54.5131% ( 22) 00:07:49.853 12.406 - 12.455: 54.9625% ( 24) 00:07:49.853 12.455 - 12.505: 55.4120% ( 24) 00:07:49.853 12.505 - 12.554: 55.6742% ( 14) 00:07:49.853 12.554 - 12.603: 56.1423% ( 25) 00:07:49.853 12.603 - 12.702: 56.8352% ( 37) 00:07:49.853 12.702 - 12.800: 57.6592% ( 44) 00:07:49.853 12.800 - 12.898: 58.1648% ( 27) 00:07:49.853 12.898 - 12.997: 58.8390% ( 36) 00:07:49.853 12.997 - 13.095: 59.4007% ( 30) 00:07:49.853 13.095 - 13.194: 59.8876% ( 26) 00:07:49.853 13.194 - 13.292: 60.4494% ( 30) 00:07:49.853 13.292 - 13.391: 60.9551% ( 27) 00:07:49.853 13.391 - 13.489: 61.6105% ( 35) 00:07:49.853 13.489 - 13.588: 62.8277% ( 65) 00:07:49.853 13.588 - 13.686: 64.2697% ( 77) 00:07:49.853 13.686 - 13.785: 66.7041% ( 130) 00:07:49.853 13.785 - 13.883: 69.3258% ( 140) 00:07:49.853 13.883 - 13.982: 71.8352% ( 134) 00:07:49.853 13.982 - 14.080: 73.9139% ( 111) 00:07:49.853 14.080 - 14.178: 75.5993% ( 90) 00:07:49.853 14.178 - 14.277: 76.7603% ( 62) 00:07:49.853 14.277 - 14.375: 77.8839% ( 60) 00:07:49.853 14.375 - 14.474: 78.6517% ( 41) 00:07:49.853 14.474 - 14.572: 79.4195% ( 41) 00:07:49.853 14.572 - 14.671: 80.2060% ( 42) 00:07:49.853 14.671 - 14.769: 81.2360% ( 55) 00:07:49.853 14.769 - 14.868: 83.0337% ( 96) 00:07:49.853 14.868 - 14.966: 85.2247% ( 117) 00:07:49.853 14.966 - 15.065: 87.3221% ( 112) 00:07:49.853 15.065 - 15.163: 89.0637% ( 93) 00:07:49.853 15.163 - 15.262: 90.5618% ( 80) 00:07:49.853 15.262 - 15.360: 91.4419% ( 47) 00:07:49.853 15.360 - 15.458: 92.2097% ( 41) 00:07:49.853 15.458 - 15.557: 92.7528% ( 29) 00:07:49.853 15.557 - 15.655: 93.1461% ( 21) 00:07:49.853 15.655 - 15.754: 93.4082% ( 14) 00:07:49.853 15.754 - 15.852: 93.5768% ( 9) 00:07:49.853 15.852 - 15.951: 93.7266% ( 8) 00:07:49.853 15.951 - 16.049: 93.7828% ( 3) 00:07:49.853 16.049 - 16.148: 93.8577% ( 4) 00:07:49.853 16.148 - 16.246: 93.9326% ( 4) 00:07:49.853 
16.246 - 16.345: 94.0075% ( 4) 00:07:49.853 16.345 - 16.443: 94.0637% ( 3) 00:07:49.853 16.443 - 16.542: 94.0824% ( 1) 00:07:49.853 16.542 - 16.640: 94.1199% ( 2) 00:07:49.853 16.640 - 16.738: 94.1948% ( 4) 00:07:49.853 16.738 - 16.837: 94.3071% ( 6) 00:07:49.853 16.837 - 16.935: 94.3446% ( 2) 00:07:49.853 16.935 - 17.034: 94.4757% ( 7) 00:07:49.853 17.034 - 17.132: 94.6255% ( 8) 00:07:49.853 17.132 - 17.231: 94.8127% ( 10) 00:07:49.853 17.231 - 17.329: 94.9438% ( 7) 00:07:49.853 17.329 - 17.428: 95.0375% ( 5) 00:07:49.853 17.428 - 17.526: 95.1685% ( 7) 00:07:49.853 17.526 - 17.625: 95.2622% ( 5) 00:07:49.853 17.625 - 17.723: 95.4120% ( 8) 00:07:49.853 17.723 - 17.822: 95.6180% ( 11) 00:07:49.853 17.822 - 17.920: 95.6367% ( 1) 00:07:49.853 17.920 - 18.018: 95.8052% ( 9) 00:07:49.853 18.018 - 18.117: 96.0112% ( 11) 00:07:49.853 18.117 - 18.215: 96.1798% ( 9) 00:07:49.853 18.215 - 18.314: 96.3296% ( 8) 00:07:49.853 18.314 - 18.412: 96.4607% ( 7) 00:07:49.853 18.412 - 18.511: 96.5918% ( 7) 00:07:49.853 18.511 - 18.609: 96.6667% ( 4) 00:07:49.853 18.609 - 18.708: 96.7790% ( 6) 00:07:49.853 18.708 - 18.806: 96.9288% ( 8) 00:07:49.853 18.806 - 18.905: 97.0225% ( 5) 00:07:49.853 18.905 - 19.003: 97.1161% ( 5) 00:07:49.854 19.003 - 19.102: 97.1910% ( 4) 00:07:49.854 19.102 - 19.200: 97.3408% ( 8) 00:07:49.854 19.200 - 19.298: 97.3783% ( 2) 00:07:49.854 19.298 - 19.397: 97.4532% ( 4) 00:07:49.854 19.397 - 19.495: 97.5843% ( 7) 00:07:49.854 19.495 - 19.594: 97.6404% ( 3) 00:07:49.854 19.594 - 19.692: 97.6592% ( 1) 00:07:49.854 19.692 - 19.791: 97.7528% ( 5) 00:07:49.854 19.791 - 19.889: 97.9213% ( 9) 00:07:49.854 19.889 - 19.988: 97.9401% ( 1) 00:07:49.854 20.086 - 20.185: 98.0337% ( 5) 00:07:49.854 20.185 - 20.283: 98.1086% ( 4) 00:07:49.854 20.283 - 20.382: 98.1461% ( 2) 00:07:49.854 20.382 - 20.480: 98.2022% ( 3) 00:07:49.854 20.480 - 20.578: 98.3333% ( 7) 00:07:49.854 20.578 - 20.677: 98.4082% ( 4) 00:07:49.854 20.677 - 20.775: 98.5393% ( 7) 00:07:49.854 20.775 - 20.874: 98.5955% ( 3) 00:07:49.854 20.874 - 20.972: 98.6704% ( 4) 00:07:49.854 20.972 - 21.071: 98.7266% ( 3) 00:07:49.854 21.071 - 21.169: 98.7640% ( 2) 00:07:49.854 21.169 - 21.268: 98.8202% ( 3) 00:07:49.854 21.268 - 21.366: 98.8951% ( 4) 00:07:49.854 21.366 - 21.465: 98.9139% ( 1) 00:07:49.854 21.563 - 21.662: 98.9513% ( 2) 00:07:49.854 21.662 - 21.760: 98.9888% ( 2) 00:07:49.854 21.858 - 21.957: 99.0449% ( 3) 00:07:49.854 21.957 - 22.055: 99.0637% ( 1) 00:07:49.854 22.055 - 22.154: 99.1386% ( 4) 00:07:49.854 22.154 - 22.252: 99.1573% ( 1) 00:07:49.854 22.252 - 22.351: 99.1760% ( 1) 00:07:49.854 22.351 - 22.449: 99.1948% ( 1) 00:07:49.854 22.449 - 22.548: 99.2322% ( 2) 00:07:49.854 22.548 - 22.646: 99.2697% ( 2) 00:07:49.854 22.646 - 22.745: 99.3258% ( 3) 00:07:49.854 22.745 - 22.843: 99.3446% ( 1) 00:07:49.854 22.942 - 23.040: 99.3633% ( 1) 00:07:49.854 23.040 - 23.138: 99.4007% ( 2) 00:07:49.854 23.237 - 23.335: 99.4195% ( 1) 00:07:49.854 23.335 - 23.434: 99.4382% ( 1) 00:07:49.854 23.434 - 23.532: 99.4944% ( 3) 00:07:49.854 23.631 - 23.729: 99.5131% ( 1) 00:07:49.854 23.729 - 23.828: 99.5318% ( 1) 00:07:49.854 23.926 - 24.025: 99.5506% ( 1) 00:07:49.854 24.025 - 24.123: 99.5880% ( 2) 00:07:49.854 24.222 - 24.320: 99.6255% ( 2) 00:07:49.854 24.320 - 24.418: 99.6442% ( 1) 00:07:49.854 24.812 - 24.911: 99.6629% ( 1) 00:07:49.854 25.600 - 25.797: 99.6816% ( 1) 00:07:49.854 26.388 - 26.585: 99.7004% ( 1) 00:07:49.854 27.175 - 27.372: 99.7191% ( 1) 00:07:49.854 28.160 - 28.357: 99.7378% ( 1) 00:07:49.854 28.948 - 29.145: 99.7566% ( 1) 
00:07:49.854 33.083 - 33.280: 99.7753% ( 1) 00:07:49.854 35.643 - 35.840: 99.7940% ( 1) 00:07:49.854 38.597 - 38.794: 99.8127% ( 1) 00:07:49.854 42.338 - 42.535: 99.8315% ( 1) 00:07:49.854 44.111 - 44.308: 99.8502% ( 1) 00:07:49.854 44.505 - 44.702: 99.8689% ( 1) 00:07:49.854 45.489 - 45.686: 99.8876% ( 1) 00:07:49.854 48.443 - 48.640: 99.9064% ( 1) 00:07:49.854 53.957 - 54.351: 99.9251% ( 1) 00:07:49.854 59.865 - 60.258: 99.9438% ( 1) 00:07:49.854 90.978 - 91.372: 99.9625% ( 1) 00:07:49.854 107.914 - 108.702: 99.9813% ( 1) 00:07:49.854 122.880 - 123.668: 100.0000% ( 1) 00:07:49.854 00:07:49.854 Complete histogram 00:07:49.854 ================== 00:07:49.854 Range in us Cumulative Count 00:07:49.854 7.237 - 7.286: 0.0375% ( 2) 00:07:49.854 7.286 - 7.335: 0.7491% ( 38) 00:07:49.854 7.335 - 7.385: 4.7004% ( 211) 00:07:49.854 7.385 - 7.434: 12.7528% ( 430) 00:07:49.854 7.434 - 7.483: 22.5281% ( 522) 00:07:49.854 7.483 - 7.532: 31.1798% ( 462) 00:07:49.854 7.532 - 7.582: 37.7528% ( 351) 00:07:49.854 7.582 - 7.631: 42.1161% ( 233) 00:07:49.854 7.631 - 7.680: 45.7491% ( 194) 00:07:49.854 7.680 - 7.729: 48.1461% ( 128) 00:07:49.854 7.729 - 7.778: 49.7753% ( 87) 00:07:49.854 7.778 - 7.828: 51.2921% ( 81) 00:07:49.854 7.828 - 7.877: 52.6030% ( 70) 00:07:49.854 7.877 - 7.926: 54.0075% ( 75) 00:07:49.854 7.926 - 7.975: 55.5056% ( 80) 00:07:49.854 7.975 - 8.025: 56.6667% ( 62) 00:07:49.854 8.025 - 8.074: 57.8464% ( 63) 00:07:49.854 8.074 - 8.123: 59.5318% ( 90) 00:07:49.854 8.123 - 8.172: 61.0112% ( 79) 00:07:49.854 8.172 - 8.222: 62.5468% ( 82) 00:07:49.854 8.222 - 8.271: 64.7378% ( 117) 00:07:49.854 8.271 - 8.320: 67.3408% ( 139) 00:07:49.854 8.320 - 8.369: 70.7865% ( 184) 00:07:49.854 8.369 - 8.418: 73.5206% ( 146) 00:07:49.854 8.418 - 8.468: 76.0112% ( 133) 00:07:49.854 8.468 - 8.517: 78.2397% ( 119) 00:07:49.854 8.517 - 8.566: 79.6442% ( 75) 00:07:49.854 8.566 - 8.615: 80.5618% ( 49) 00:07:49.854 8.615 - 8.665: 81.3296% ( 41) 00:07:49.854 8.665 - 8.714: 81.8914% ( 30) 00:07:49.854 8.714 - 8.763: 82.1723% ( 15) 00:07:49.854 8.763 - 8.812: 82.2846% ( 6) 00:07:49.854 8.812 - 8.862: 82.3783% ( 5) 00:07:49.854 8.862 - 8.911: 82.4906% ( 6) 00:07:49.854 8.911 - 8.960: 82.5843% ( 5) 00:07:49.854 8.960 - 9.009: 82.6217% ( 2) 00:07:49.854 9.009 - 9.058: 82.6779% ( 3) 00:07:49.854 9.157 - 9.206: 82.7341% ( 3) 00:07:49.854 9.206 - 9.255: 82.8464% ( 6) 00:07:49.854 9.255 - 9.305: 82.8839% ( 2) 00:07:49.854 9.305 - 9.354: 82.9213% ( 2) 00:07:49.854 9.354 - 9.403: 83.0150% ( 5) 00:07:49.854 9.403 - 9.452: 83.0524% ( 2) 00:07:49.854 9.452 - 9.502: 83.1086% ( 3) 00:07:49.854 9.502 - 9.551: 83.3895% ( 15) 00:07:49.854 9.551 - 9.600: 83.6704% ( 15) 00:07:49.854 9.600 - 9.649: 84.0637% ( 21) 00:07:49.854 9.649 - 9.698: 84.8315% ( 41) 00:07:49.854 9.698 - 9.748: 85.8240% ( 53) 00:07:49.854 9.748 - 9.797: 87.2285% ( 75) 00:07:49.854 9.797 - 9.846: 88.4831% ( 67) 00:07:49.854 9.846 - 9.895: 89.7191% ( 66) 00:07:49.854 9.895 - 9.945: 90.8240% ( 59) 00:07:49.854 9.945 - 9.994: 92.1348% ( 70) 00:07:49.854 9.994 - 10.043: 93.1648% ( 55) 00:07:49.854 10.043 - 10.092: 94.1948% ( 55) 00:07:49.854 10.092 - 10.142: 95.0375% ( 45) 00:07:49.854 10.142 - 10.191: 95.6929% ( 35) 00:07:49.854 10.191 - 10.240: 96.2172% ( 28) 00:07:49.854 10.240 - 10.289: 96.4794% ( 14) 00:07:49.854 10.289 - 10.338: 96.7978% ( 17) 00:07:49.854 10.338 - 10.388: 96.9288% ( 7) 00:07:49.854 10.388 - 10.437: 97.0787% ( 8) 00:07:49.854 10.437 - 10.486: 97.2097% ( 7) 00:07:49.854 10.486 - 10.535: 97.3783% ( 9) 00:07:49.854 10.535 - 10.585: 97.4532% ( 4) 
00:07:49.854 10.585 - 10.634: 97.4906% ( 2) 00:07:49.854 10.634 - 10.683: 97.5468% ( 3) 00:07:49.854 10.683 - 10.732: 97.5843% ( 2) 00:07:49.854 10.732 - 10.782: 97.6404% ( 3) 00:07:49.854 10.831 - 10.880: 97.6592% ( 1) 00:07:49.854 10.880 - 10.929: 97.6779% ( 1) 00:07:49.854 10.929 - 10.978: 97.6966% ( 1) 00:07:49.854 11.028 - 11.077: 97.7903% ( 5) 00:07:49.854 11.126 - 11.175: 97.8090% ( 1) 00:07:49.854 11.225 - 11.274: 97.8277% ( 1) 00:07:49.854 11.471 - 11.520: 97.8464% ( 1) 00:07:49.854 11.520 - 11.569: 97.8652% ( 1) 00:07:49.854 11.569 - 11.618: 97.8839% ( 1) 00:07:49.854 11.668 - 11.717: 97.9026% ( 1) 00:07:49.854 11.717 - 11.766: 97.9213% ( 1) 00:07:49.854 11.766 - 11.815: 97.9401% ( 1) 00:07:49.854 11.865 - 11.914: 97.9588% ( 1) 00:07:49.854 11.963 - 12.012: 97.9775% ( 1) 00:07:49.854 12.012 - 12.062: 97.9963% ( 1) 00:07:49.854 12.062 - 12.111: 98.0337% ( 2) 00:07:49.854 12.111 - 12.160: 98.0524% ( 1) 00:07:49.854 12.160 - 12.209: 98.0712% ( 1) 00:07:49.854 12.258 - 12.308: 98.0899% ( 1) 00:07:49.854 12.406 - 12.455: 98.1086% ( 1) 00:07:49.854 12.455 - 12.505: 98.1273% ( 1) 00:07:49.854 12.505 - 12.554: 98.1461% ( 1) 00:07:49.854 12.554 - 12.603: 98.1648% ( 1) 00:07:49.854 12.702 - 12.800: 98.1835% ( 1) 00:07:49.854 12.898 - 12.997: 98.2210% ( 2) 00:07:49.854 12.997 - 13.095: 98.2397% ( 1) 00:07:49.854 13.095 - 13.194: 98.2959% ( 3) 00:07:49.854 13.194 - 13.292: 98.3708% ( 4) 00:07:49.854 13.292 - 13.391: 98.4270% ( 3) 00:07:49.854 13.391 - 13.489: 98.5393% ( 6) 00:07:49.854 13.489 - 13.588: 98.6142% ( 4) 00:07:49.854 13.686 - 13.785: 98.6517% ( 2) 00:07:49.854 13.785 - 13.883: 98.6891% ( 2) 00:07:49.854 13.883 - 13.982: 98.7453% ( 3) 00:07:49.854 14.178 - 14.277: 98.7640% ( 1) 00:07:49.854 14.277 - 14.375: 98.7828% ( 1) 00:07:49.854 14.375 - 14.474: 98.8202% ( 2) 00:07:49.854 14.474 - 14.572: 98.9139% ( 5) 00:07:49.854 14.769 - 14.868: 98.9326% ( 1) 00:07:49.854 14.966 - 15.065: 98.9513% ( 1) 00:07:49.854 15.163 - 15.262: 99.0075% ( 3) 00:07:49.854 15.262 - 15.360: 99.0824% ( 4) 00:07:49.854 15.458 - 15.557: 99.1573% ( 4) 00:07:49.854 15.557 - 15.655: 99.1760% ( 1) 00:07:49.854 15.655 - 15.754: 99.2135% ( 2) 00:07:49.854 15.754 - 15.852: 99.2322% ( 1) 00:07:49.854 15.951 - 16.049: 99.2697% ( 2) 00:07:49.854 16.345 - 16.443: 99.2884% ( 1) 00:07:49.854 16.443 - 16.542: 99.3071% ( 1) 00:07:49.854 16.542 - 16.640: 99.3258% ( 1) 00:07:49.854 16.738 - 16.837: 99.3446% ( 1) 00:07:49.854 16.837 - 16.935: 99.3820% ( 2) 00:07:49.854 16.935 - 17.034: 99.4007% ( 1) 00:07:49.854 17.034 - 17.132: 99.4195% ( 1) 00:07:49.854 17.132 - 17.231: 99.4382% ( 1) 00:07:49.854 17.526 - 17.625: 99.4569% ( 1) 00:07:49.854 17.920 - 18.018: 99.4944% ( 2) 00:07:49.854 18.018 - 18.117: 99.5318% ( 2) 00:07:49.855 18.215 - 18.314: 99.5506% ( 1) 00:07:49.855 18.314 - 18.412: 99.5693% ( 1) 00:07:49.855 18.511 - 18.609: 99.5880% ( 1) 00:07:49.855 18.806 - 18.905: 99.6255% ( 2) 00:07:49.855 19.003 - 19.102: 99.6629% ( 2) 00:07:49.855 19.102 - 19.200: 99.6816% ( 1) 00:07:49.855 19.495 - 19.594: 99.7004% ( 1) 00:07:49.855 20.382 - 20.480: 99.7191% ( 1) 00:07:49.855 20.972 - 21.071: 99.7378% ( 1) 00:07:49.855 21.071 - 21.169: 99.7566% ( 1) 00:07:49.855 21.858 - 21.957: 99.7753% ( 1) 00:07:49.855 23.434 - 23.532: 99.7940% ( 1) 00:07:49.855 23.729 - 23.828: 99.8127% ( 1) 00:07:49.855 23.828 - 23.926: 99.8315% ( 1) 00:07:49.855 25.403 - 25.600: 99.8502% ( 1) 00:07:49.855 31.508 - 31.705: 99.8689% ( 1) 00:07:49.855 35.446 - 35.643: 99.8876% ( 1) 00:07:49.855 37.809 - 38.006: 99.9064% ( 1) 00:07:49.855 39.778 - 39.975: 
99.9251% ( 1) 00:07:49.855 42.535 - 42.732: 99.9438% ( 1) 00:07:49.855 66.166 - 66.560: 99.9625% ( 1) 00:07:49.855 103.975 - 104.763: 99.9813% ( 1) 00:07:49.855 319.803 - 321.378: 100.0000% ( 1) 00:07:49.855 00:07:49.855 00:07:49.855 real 0m1.238s 00:07:49.855 user 0m1.081s 00:07:49.855 sys 0m0.103s 00:07:49.855 12:33:49 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.855 ************************************ 00:07:49.855 END TEST nvme_overhead 00:07:49.855 ************************************ 00:07:49.855 12:33:49 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:49.855 12:33:49 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:49.855 12:33:49 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:49.855 12:33:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.855 12:33:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.855 ************************************ 00:07:49.855 START TEST nvme_arbitration 00:07:49.855 ************************************ 00:07:49.855 12:33:49 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:53.159 Initializing NVMe Controllers 00:07:53.159 Attached to 0000:00:13.0 00:07:53.159 Attached to 0000:00:10.0 00:07:53.159 Attached to 0000:00:11.0 00:07:53.159 Attached to 0000:00:12.0 00:07:53.159 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:53.159 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:53.159 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:53.159 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:53.159 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:53.159 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:53.159 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:53.159 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:53.159 Initialization complete. Launching workers. 
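The submit and complete histograms above measure pure software overhead: how long the call that queues one 4 KiB IO takes, and how long the completion path takes, bucketed in microseconds ("Range in us"). A sketch of the submit-side measurement using SPDK's tick counters (the bookkeeping is illustrative, not the overhead tool's code):

    #include <spdk/env.h>
    #include <spdk/nvme.h>

    static uint64_t g_tsc_rate;   /* set once: g_tsc_rate = spdk_get_ticks_hz(); */

    static void
    io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        (void)cpl;
    }

    /* Returns nanoseconds spent inside the submission call for one read. */
    static uint64_t
    timed_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                 void *buf, uint64_t lba)
    {
        uint64_t start = spdk_get_ticks();
        int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1, io_done, NULL, 0);

        /* Deltas are tiny, so scaling to ns cannot overflow here. */
        return rc == 0 ? (spdk_get_ticks() - start) * 1000000000ULL / g_tsc_rate : 0;
    }

The complete-side number is taken the same way around spdk_nvme_qpair_process_completions(); the long tails (123 us max submit, 321 us max complete) are the scheduling noise a one-second run inevitably catches.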
00:07:53.159 Starting thread on core 1 with urgent priority queue 00:07:53.159 Starting thread on core 2 with urgent priority queue 00:07:53.159 Starting thread on core 3 with urgent priority queue 00:07:53.159 Starting thread on core 0 with urgent priority queue 00:07:53.159 QEMU NVMe Ctrl (12343 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:07:53.159 QEMU NVMe Ctrl (12342 ) core 0: 768.00 IO/s 130.21 secs/100000 ios 00:07:53.159 QEMU NVMe Ctrl (12340 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:07:53.159 QEMU NVMe Ctrl (12342 ) core 1: 789.33 IO/s 126.69 secs/100000 ios 00:07:53.159 QEMU NVMe Ctrl (12341 ) core 2: 853.33 IO/s 117.19 secs/100000 ios 00:07:53.159 QEMU NVMe Ctrl (12342 ) core 3: 810.67 IO/s 123.36 secs/100000 ios 00:07:53.159 ======================================================== 00:07:53.159 00:07:53.159 00:07:53.159 real 0m3.299s 00:07:53.159 user 0m9.181s 00:07:53.159 sys 0m0.130s 00:07:53.159 12:33:52 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.159 12:33:52 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:53.159 ************************************ 00:07:53.159 END TEST nvme_arbitration 00:07:53.159 ************************************ 00:07:53.159 12:33:52 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:53.159 12:33:52 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:53.159 12:33:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.159 12:33:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.159 ************************************ 00:07:53.159 START TEST nvme_single_aen 00:07:53.159 ************************************ 00:07:53.159 12:33:52 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:53.159 Asynchronous Event Request test 00:07:53.159 Attached to 0000:00:13.0 00:07:53.159 Attached to 0000:00:10.0 00:07:53.159 Attached to 0000:00:11.0 00:07:53.159 Attached to 0000:00:12.0 00:07:53.159 Reset controller to setup AER completions for this process 00:07:53.159 Registering asynchronous event callbacks... 
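In the arbitration run above, each core drives a queue pair created with a different priority class, and the per-core IO/s then reflects how the controller's weighted round-robin arbitration shares the device. A sketch of creating one such qpair (this assumes weighted round robin was enabled when the controller was initialized; otherwise qprio is ignored):

    #include <spdk/nvme.h>

    static struct spdk_nvme_qpair *
    alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.qprio = SPDK_NVME_QPRIO_URGENT;   /* also _HIGH, _MEDIUM, _LOW */

        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }

With the emulated controllers the rates stay nearly equal (768-853 IO/s per core), so the priority weights have little visible effect in this run.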
00:07:53.159 Getting orig temperature thresholds of all controllers 00:07:53.159 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.159 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.159 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.159 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:53.159 Setting all controllers temperature threshold low to trigger AER 00:07:53.159 Waiting for all controllers temperature threshold to be set lower 00:07:53.159 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.159 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:53.159 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.159 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:53.159 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.159 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:53.159 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:53.159 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:53.159 Waiting for all controllers to trigger AER and reset threshold 00:07:53.159 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.159 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.159 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.159 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:53.159 Cleaning up... 00:07:53.420 00:07:53.420 real 0m0.254s 00:07:53.420 user 0m0.088s 00:07:53.420 sys 0m0.121s 00:07:53.420 12:33:52 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.420 12:33:52 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:53.420 ************************************ 00:07:53.420 END TEST nvme_single_aen 00:07:53.420 ************************************ 00:07:53.420 12:33:52 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:53.420 12:33:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.420 12:33:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.420 12:33:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.420 ************************************ 00:07:53.420 START TEST nvme_doorbell_aers 00:07:53.420 ************************************ 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:53.420 12:33:52 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
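The single-AEN pass that just finished works by dropping each controller's temperature threshold below the emulated 323 K reading so the controller raises an Asynchronous Event, then restoring the original 343 K threshold from the callback. The two driver-side pieces, sketched (the callback body is illustrative):

    #include <spdk/nvme.h>
    #include <spdk/nvme_spec.h>

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)arg;
        if (spdk_nvme_cpl_is_error(cpl)) {
            return;
        }
        /* cpl->cdw0 encodes the event type/info; log page 2 (SMART / Health
         * Information) is then read for the temperature, as seen above. */
    }

    static int
    arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr, uint16_t kelvin,
                        spdk_nvme_cmd_cb cb_fn, void *cb_arg)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* Set Features: Temperature Threshold; cdw11 bits 15:0 carry the
         * threshold in Kelvin. */
        return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                kelvin, 0, NULL, 0, cb_fn, cb_arg);
    }

Admin completions, including the AER itself, only arrive while the process polls spdk_nvme_ctrlr_process_admin_completions().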
00:07:53.420 12:33:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:53.420 12:33:53 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:53.421 12:33:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:53.421 12:33:53 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:53.681 [2024-12-14 12:33:53.264462] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:03.706 Executing: test_write_invalid_db 00:08:03.706 Waiting for AER completion... 00:08:03.706 Failure: test_write_invalid_db 00:08:03.706 00:08:03.706 Executing: test_invalid_db_write_overflow_sq 00:08:03.706 Waiting for AER completion... 00:08:03.706 Failure: test_invalid_db_write_overflow_sq 00:08:03.706 00:08:03.706 Executing: test_invalid_db_write_overflow_cq 00:08:03.706 Waiting for AER completion... 00:08:03.706 Failure: test_invalid_db_write_overflow_cq 00:08:03.706 00:08:03.706 12:34:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:03.706 12:34:03 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:03.706 [2024-12-14 12:34:03.256956] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:13.768 Executing: test_write_invalid_db 00:08:13.768 Waiting for AER completion... 00:08:13.768 Failure: test_write_invalid_db 00:08:13.768 00:08:13.768 Executing: test_invalid_db_write_overflow_sq 00:08:13.768 Waiting for AER completion... 00:08:13.768 Failure: test_invalid_db_write_overflow_sq 00:08:13.768 00:08:13.768 Executing: test_invalid_db_write_overflow_cq 00:08:13.768 Waiting for AER completion... 00:08:13.768 Failure: test_invalid_db_write_overflow_cq 00:08:13.768 00:08:13.768 12:34:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:13.768 12:34:13 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:13.768 [2024-12-14 12:34:13.300468] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:23.760 Executing: test_write_invalid_db 00:08:23.760 Waiting for AER completion... 00:08:23.760 Failure: test_write_invalid_db 00:08:23.760 00:08:23.760 Executing: test_invalid_db_write_overflow_sq 00:08:23.760 Waiting for AER completion... 00:08:23.760 Failure: test_invalid_db_write_overflow_sq 00:08:23.760 00:08:23.760 Executing: test_invalid_db_write_overflow_cq 00:08:23.760 Waiting for AER completion... 
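Each doorbell_aers iteration above targets one device through the -r 'trtype:PCIe traddr:...' argument. That string parses into an SPDK transport ID, which a process can connect to directly rather than probing the whole bus; a short sketch:

    #include <string.h>
    #include <spdk/nvme.h>

    static struct spdk_nvme_ctrlr *
    connect_by_traddr(const char *trid_str)
    {
        struct spdk_nvme_transport_id trid;

        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid, trid_str) != 0) {
            return NULL;
        }
        /* NULL opts: take the driver defaults for this connection. */
        return spdk_nvme_connect(&trid, NULL, 0);
    }

For example, connect_by_traddr("trtype:PCIe traddr:0000:00:10.0") attaches the first of the four controllers under test.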
00:08:23.760 Failure: test_invalid_db_write_overflow_cq 00:08:23.760 00:08:23.760 12:34:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:23.760 12:34:23 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:23.760 [2024-12-14 12:34:23.329269] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.738 Executing: test_write_invalid_db 00:08:33.738 Waiting for AER completion... 00:08:33.738 Failure: test_write_invalid_db 00:08:33.738 00:08:33.738 Executing: test_invalid_db_write_overflow_sq 00:08:33.738 Waiting for AER completion... 00:08:33.738 Failure: test_invalid_db_write_overflow_sq 00:08:33.738 00:08:33.738 Executing: test_invalid_db_write_overflow_cq 00:08:33.738 Waiting for AER completion... 00:08:33.738 Failure: test_invalid_db_write_overflow_cq 00:08:33.738 00:08:33.738 ************************************ 00:08:33.738 END TEST nvme_doorbell_aers 00:08:33.738 ************************************ 00:08:33.738 00:08:33.738 real 0m40.191s 00:08:33.738 user 0m34.130s 00:08:33.738 sys 0m5.684s 00:08:33.738 12:34:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.739 12:34:33 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:33.739 12:34:33 nvme -- nvme/nvme.sh@97 -- # uname 00:08:33.739 12:34:33 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:33.739 12:34:33 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:33.739 12:34:33 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:33.739 12:34:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.739 12:34:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.739 ************************************ 00:08:33.739 START TEST nvme_multi_aen 00:08:33.739 ************************************ 00:08:33.739 12:34:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:33.739 [2024-12-14 12:34:33.407550] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.408011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.408093] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.409305] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.409451] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.409520] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.410654] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. 
Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.410791] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.410883] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.411996] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.412141] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 [2024-12-14 12:34:33.412232] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64935) is not found. Dropping the request. 00:08:33.739 Child process pid: 65456 00:08:34.010 [Child] Asynchronous Event Request test 00:08:34.010 [Child] Attached to 0000:00:13.0 00:08:34.010 [Child] Attached to 0000:00:10.0 00:08:34.010 [Child] Attached to 0000:00:11.0 00:08:34.010 [Child] Attached to 0000:00:12.0 00:08:34.010 [Child] Registering asynchronous event callbacks... 00:08:34.010 [Child] Getting orig temperature thresholds of all controllers 00:08:34.010 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:34.010 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 [Child] Cleaning up... 00:08:34.010 Asynchronous Event Request test 00:08:34.010 Attached to 0000:00:13.0 00:08:34.010 Attached to 0000:00:10.0 00:08:34.010 Attached to 0000:00:11.0 00:08:34.010 Attached to 0000:00:12.0 00:08:34.010 Reset controller to setup AER completions for this process 00:08:34.010 Registering asynchronous event callbacks... 
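The multi-AEN run invokes aer with -m, which spawns a child that attaches to the same four controllers; that is why a complete [Child] test pass (pid 65456) appears alongside the parent's. SPDK processes share controllers by initializing the env layer with the same shared-memory ID, the -i 0 visible on the command lines above. A hedged sketch of that setup (the process name is illustrative):

    #include <stdio.h>
    #include <spdk/env.h>

    static int
    init_shared_env(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "aer_test";   /* illustrative process name */
        opts.shm_id = 0;          /* matches -i 0 on the command line */

        if (spdk_env_init(&opts) < 0) {
            return -1;
        }
        printf("%s process\n",
               spdk_process_is_primary() ? "primary" : "secondary");
        return 0;
    }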
00:08:34.010 Getting orig temperature thresholds of all controllers 00:08:34.010 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:34.010 Setting all controllers temperature threshold low to trigger AER 00:08:34.010 Waiting for all controllers temperature threshold to be set lower 00:08:34.010 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:34.010 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:34.010 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:34.010 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:34.010 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:34.010 Waiting for all controllers to trigger AER and reset threshold 00:08:34.010 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:34.010 Cleaning up... 00:08:34.010 00:08:34.010 real 0m0.456s 00:08:34.010 user 0m0.143s 00:08:34.010 sys 0m0.183s 00:08:34.010 12:34:33 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.010 12:34:33 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:34.010 ************************************ 00:08:34.010 END TEST nvme_multi_aen 00:08:34.010 ************************************ 00:08:34.010 12:34:33 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:34.010 12:34:33 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:34.010 12:34:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.010 12:34:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.010 ************************************ 00:08:34.010 START TEST nvme_startup 00:08:34.010 ************************************ 00:08:34.010 12:34:33 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:34.278 Initializing NVMe Controllers 00:08:34.278 Attached to 0000:00:13.0 00:08:34.278 Attached to 0000:00:10.0 00:08:34.278 Attached to 0000:00:11.0 00:08:34.278 Attached to 0000:00:12.0 00:08:34.278 Initialization complete. 00:08:34.278 Time used:132966.406 (us). 
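Each "aer_cb for log page 2" line above is followed by a read of that log page, the SMART / Health Information page, which is where the 323 K current-temperature readings come from. A sketch of the read (the global buffer and the callback wiring are illustrative):

    #include <spdk/nvme.h>
    #include <spdk/nvme_spec.h>

    static struct spdk_nvme_health_information_page g_health;

    static int
    read_health_page(struct spdk_nvme_ctrlr *ctrlr, spdk_nvme_cmd_cb cb_fn)
    {
        /* Log page 0x02 with the global namespace tag returns
         * controller-wide SMART data; g_health.temperature is in Kelvin. */
        return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                &g_health, sizeof(g_health), 0, cb_fn, NULL);
    }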
00:08:34.278 00:08:34.278 real 0m0.189s 00:08:34.278 user 0m0.066s 00:08:34.278 sys 0m0.088s 00:08:34.278 ************************************ 00:08:34.278 END TEST nvme_startup 00:08:34.279 ************************************ 00:08:34.279 12:34:33 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.279 12:34:33 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 12:34:33 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:34.279 12:34:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.279 12:34:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.279 12:34:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.279 ************************************ 00:08:34.279 START TEST nvme_multi_secondary 00:08:34.279 ************************************ 00:08:34.279 12:34:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:34.279 12:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65507 00:08:34.279 12:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:34.279 12:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65508 00:08:34.279 12:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:34.279 12:34:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:37.584 Initializing NVMe Controllers 00:08:37.584 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.584 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.584 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.584 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.584 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:37.584 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:37.584 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:37.584 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:37.584 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:37.584 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:37.584 Initialization complete. Launching workers. 
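Each spdk_nvme_perf instance launched above keeps 16 reads in flight (-q 16) of 4096 bytes (-o 4096) on one core (-c is the core mask: 0x1, 0x2, 0x4); the throughput in the latency tables that follow comes from resubmitting inside the completion callback while polling the qpair. A stripped-down sketch of that loop (fixed LBA and a single shared buffer for brevity; real perf code randomizes LBAs and gives each in-flight IO its own buffer):

    #include <spdk/env.h>
    #include <spdk/nvme.h>

    struct io_ctx {
        struct spdk_nvme_ns    *ns;
        struct spdk_nvme_qpair *qpair;
        void                   *buf;        /* DMA-able, e.g. from spdk_zmalloc() */
        uint64_t                completed;
    };

    static void resubmit(void *arg, const struct spdk_nvme_cpl *cpl);

    static void
    submit_one(struct io_ctx *ctx)
    {
        uint32_t blocks = 4096 / spdk_nvme_ns_get_sector_size(ctx->ns);

        spdk_nvme_ns_cmd_read(ctx->ns, ctx->qpair, ctx->buf, 0, blocks,
                              resubmit, ctx, 0);
    }

    static void
    resubmit(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *ctx = arg;

        (void)cpl;
        ctx->completed++;
        submit_one(ctx);    /* keep the queue depth constant */
    }

    static void
    run(struct io_ctx *ctx, int queue_depth)
    {
        for (int i = 0; i < queue_depth; i++) {
            submit_one(ctx);
        }
        for (;;) {          /* a real run exits when the -t timer fires */
            spdk_nvme_qpair_process_completions(ctx->qpair, 0);
        }
    }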
00:08:37.584 ======================================================== 00:08:37.584 Latency(us) 00:08:37.584 Device Information : IOPS MiB/s Average min max 00:08:37.584 PCIE (0000:00:13.0) NSID 1 from core 1: 7062.19 27.59 2265.15 1016.31 8018.42 00:08:37.584 PCIE (0000:00:10.0) NSID 1 from core 1: 7062.19 27.59 2264.33 935.69 7576.28 00:08:37.584 PCIE (0000:00:11.0) NSID 1 from core 1: 7062.19 27.59 2265.47 924.68 7304.91 00:08:37.584 PCIE (0000:00:12.0) NSID 1 from core 1: 7062.19 27.59 2265.48 918.95 7424.32 00:08:37.584 PCIE (0000:00:12.0) NSID 2 from core 1: 7062.19 27.59 2265.53 999.12 8208.40 00:08:37.584 PCIE (0000:00:12.0) NSID 3 from core 1: 7062.19 27.59 2265.51 1020.70 8135.59 00:08:37.584 ======================================================== 00:08:37.584 Total : 42373.15 165.52 2265.25 918.95 8208.40 00:08:37.584 00:08:37.846 Initializing NVMe Controllers 00:08:37.846 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:37.846 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:37.846 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:37.846 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:37.846 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:37.846 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:37.846 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:37.846 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:37.846 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:37.846 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:37.846 Initialization complete. Launching workers. 00:08:37.846 ======================================================== 00:08:37.846 Latency(us) 00:08:37.846 Device Information : IOPS MiB/s Average min max 00:08:37.846 PCIE (0000:00:13.0) NSID 1 from core 2: 1421.07 5.55 11258.47 1615.34 19347.17 00:08:37.846 PCIE (0000:00:10.0) NSID 1 from core 2: 1421.07 5.55 11257.83 1570.68 22516.11 00:08:37.846 PCIE (0000:00:11.0) NSID 1 from core 2: 1421.07 5.55 11258.39 1559.75 19816.77 00:08:37.846 PCIE (0000:00:12.0) NSID 1 from core 2: 1421.07 5.55 11258.48 1547.93 20471.28 00:08:37.846 PCIE (0000:00:12.0) NSID 2 from core 2: 1421.07 5.55 11258.45 1391.70 20979.12 00:08:37.846 PCIE (0000:00:12.0) NSID 3 from core 2: 1421.07 5.55 11258.21 1172.94 21090.20 00:08:37.846 ======================================================== 00:08:37.846 Total : 8526.40 33.31 11258.30 1172.94 22516.11 00:08:37.846 00:08:37.846 12:34:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65507 00:08:39.763 Initializing NVMe Controllers 00:08:39.763 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:39.763 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:39.763 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:39.763 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:39.763 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:39.763 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:39.763 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:39.763 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:39.763 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:39.763 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:39.763 Initialization complete. Launching workers. 
00:08:39.763 ======================================================== 00:08:39.763 Latency(us) 00:08:39.763 Device Information : IOPS MiB/s Average min max 00:08:39.763 PCIE (0000:00:13.0) NSID 1 from core 0: 10020.46 39.14 1596.35 740.30 6486.15 00:08:39.763 PCIE (0000:00:10.0) NSID 1 from core 0: 10020.46 39.14 1595.48 715.96 5800.85 00:08:39.763 PCIE (0000:00:11.0) NSID 1 from core 0: 10020.46 39.14 1596.34 714.91 5837.19 00:08:39.763 PCIE (0000:00:12.0) NSID 1 from core 0: 10020.46 39.14 1596.32 701.23 6026.07 00:08:39.763 PCIE (0000:00:12.0) NSID 2 from core 0: 10020.46 39.14 1596.31 654.23 6474.25 00:08:39.763 PCIE (0000:00:12.0) NSID 3 from core 0: 10020.46 39.14 1596.30 629.56 6905.86 00:08:39.763 ======================================================== 00:08:39.763 Total : 60122.75 234.85 1596.18 629.56 6905.86 00:08:39.763 00:08:39.763 12:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65508 00:08:39.763 12:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65577 00:08:39.763 12:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:39.763 12:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65578 00:08:39.763 12:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:39.763 12:34:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:43.067 Initializing NVMe Controllers 00:08:43.067 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.067 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.067 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.067 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.067 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:43.067 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:43.067 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:43.067 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:43.067 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:43.067 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:43.067 Initialization complete. Launching workers. 
00:08:43.067 ======================================================== 00:08:43.067 Latency(us) 00:08:43.067 Device Information : IOPS MiB/s Average min max 00:08:43.067 PCIE (0000:00:13.0) NSID 1 from core 1: 6894.21 26.93 2320.36 777.95 6991.43 00:08:43.067 PCIE (0000:00:10.0) NSID 1 from core 1: 6894.21 26.93 2319.64 752.62 6809.82 00:08:43.067 PCIE (0000:00:11.0) NSID 1 from core 1: 6894.21 26.93 2320.77 755.77 6538.74 00:08:43.067 PCIE (0000:00:12.0) NSID 1 from core 1: 6894.21 26.93 2320.74 756.59 7724.60 00:08:43.067 PCIE (0000:00:12.0) NSID 2 from core 1: 6894.21 26.93 2320.98 746.99 7990.26 00:08:43.067 PCIE (0000:00:12.0) NSID 3 from core 1: 6894.21 26.93 2321.23 792.87 7678.37 00:08:43.067 ======================================================== 00:08:43.067 Total : 41365.27 161.58 2320.62 746.99 7990.26 00:08:43.067 00:08:43.067 Initializing NVMe Controllers 00:08:43.067 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:43.067 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:43.067 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:43.067 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:43.067 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:43.067 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:43.067 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:43.067 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:43.067 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:43.067 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:43.067 Initialization complete. Launching workers. 00:08:43.067 ======================================================== 00:08:43.067 Latency(us) 00:08:43.067 Device Information : IOPS MiB/s Average min max 00:08:43.067 PCIE (0000:00:13.0) NSID 1 from core 0: 6751.09 26.37 2369.53 868.22 6596.74 00:08:43.067 PCIE (0000:00:10.0) NSID 1 from core 0: 6751.09 26.37 2368.46 881.40 6939.07 00:08:43.067 PCIE (0000:00:11.0) NSID 1 from core 0: 6751.09 26.37 2369.36 878.78 6711.60 00:08:43.067 PCIE (0000:00:12.0) NSID 1 from core 0: 6751.09 26.37 2369.26 669.11 6414.02 00:08:43.067 PCIE (0000:00:12.0) NSID 2 from core 0: 6751.09 26.37 2369.19 644.51 7213.08 00:08:43.067 PCIE (0000:00:12.0) NSID 3 from core 0: 6751.09 26.37 2369.11 611.32 6443.67 00:08:43.067 ======================================================== 00:08:43.067 Total : 40506.56 158.23 2369.15 611.32 7213.08 00:08:43.067 00:08:44.984 Initializing NVMe Controllers 00:08:44.984 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.984 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.984 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.984 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.984 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:44.984 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:44.984 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:44.984 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:44.984 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:44.984 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:44.984 Initialization complete. Launching workers. 
00:08:44.984 ======================================================== 00:08:44.984 Latency(us) 00:08:44.984 Device Information : IOPS MiB/s Average min max 00:08:44.984 PCIE (0000:00:13.0) NSID 1 from core 2: 3022.82 11.81 5292.14 810.70 20131.94 00:08:44.984 PCIE (0000:00:10.0) NSID 1 from core 2: 3022.82 11.81 5291.40 798.79 20870.85 00:08:44.984 PCIE (0000:00:11.0) NSID 1 from core 2: 3022.82 11.81 5292.69 819.67 22500.43 00:08:44.984 PCIE (0000:00:12.0) NSID 1 from core 2: 3022.82 11.81 5292.57 817.55 20685.05 00:08:44.984 PCIE (0000:00:12.0) NSID 2 from core 2: 3022.82 11.81 5292.21 827.71 21879.31 00:08:44.984 PCIE (0000:00:12.0) NSID 3 from core 2: 3022.82 11.81 5292.38 824.64 21128.14 00:08:44.984 ======================================================== 00:08:44.984 Total : 18136.90 70.85 5292.23 798.79 22500.43 00:08:44.984 00:08:44.984 12:34:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65577 00:08:44.984 ************************************ 00:08:44.984 END TEST nvme_multi_secondary 00:08:44.984 ************************************ 00:08:44.984 12:34:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65578 00:08:44.984 00:08:44.984 real 0m10.556s 00:08:44.984 user 0m18.444s 00:08:44.984 sys 0m0.681s 00:08:44.984 12:34:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:44.984 12:34:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:44.984 12:34:44 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:44.984 12:34:44 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64521 ]] 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1094 -- # kill 64521 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1095 -- # wait 64521 00:08:44.984 [2024-12-14 12:34:44.515872] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.515981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.516025] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.516054] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.519750] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.519832] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.519860] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.519887] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.522816] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 
00:08:44.984 [2024-12-14 12:34:44.522954] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.522968] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.522978] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.524548] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.524648] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.524714] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.524789] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65455) is not found. Dropping the request. 00:08:44.984 [2024-12-14 12:34:44.629452] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:44.984 12:34:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:44.984 12:34:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:44.984 ************************************ 00:08:44.984 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:44.984 ************************************ 00:08:44.984 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:45.246 * Looking for test storage... 
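The bdev_nvme_reset_stuck_adm_cmd test starting above covers recovery when an admin command never completes; at the driver level the escape hatch is a timeout callback that may reset the controller. A hedged sketch (the five-argument register call with separate I/O and admin timeouts is from recent SPDK, older releases took a single timeout; the 5 s / 15 s values simply mirror the test_timeout and err_injection_timeout knobs set just below):

    #include <spdk/nvme.h>

    static void
    on_timeout(void *cb_arg, struct spdk_nvme_ctrlr *ctrlr,
               struct spdk_nvme_qpair *qpair, uint16_t cid)
    {
        (void)cb_arg;
        (void)cid;

        /* qpair == NULL means the stuck command sits on the admin queue. */
        if (qpair == NULL) {
            spdk_nvme_ctrlr_reset(ctrlr);
        }
    }

    static void
    arm_timeouts(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_timeout_callback(ctrlr,
                5ULL * 1000 * 1000,    /* I/O timeout: 5 s in us */
                15ULL * 1000 * 1000,   /* admin timeout: 15 s in us */
                on_timeout, NULL);
    }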
00:08:45.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 00:08:45.246 ' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 00:08:45.246 ' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 00:08:45.246 ' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:45.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.246 --rc genhtml_branch_coverage=1 00:08:45.246 --rc genhtml_function_coverage=1 00:08:45.246 --rc genhtml_legend=1 00:08:45.246 --rc geninfo_all_blocks=1 00:08:45.246 --rc geninfo_unexecuted_blocks=1 00:08:45.246 00:08:45.246 ' 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:45.246 
12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:45.246 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:45.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65744 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65744 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65744 ']' 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
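get_first_nvme_bdf above resolves PCI addresses from gen_nvme.sh's JSON output instead of parsing lspci; with four QEMU controllers present it settles on 0000:00:10.0. The same lookup in isolation, assuming only what the trace shows (gen_nvme.sh emitting .config[].params.traddr fields):

    # enumerate NVMe BDFs from the generated SPDK config JSON
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"     # 0000:00:10.0 ... 0000:00:13.0 in this run
    echo "first bdf: ${bdfs[0]}"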
00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.247 12:34:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:45.247 [2024-12-14 12:34:44.936650] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:45.247 [2024-12-14 12:34:44.936921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65744 ] 00:08:45.508 [2024-12-14 12:34:45.105050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.508 [2024-12-14 12:34:45.202981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.508 [2024-12-14 12:34:45.203299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.508 [2024-12-14 12:34:45.203493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.508 [2024-12-14 12:34:45.203510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.081 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.081 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:46.081 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:46.081 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.081 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:46.342 nvme0n1 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_07aex.txt 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:46.342 true 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734179685 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65767 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:46.342 12:34:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.254 [2024-12-14 12:34:47.904979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:48.254 [2024-12-14 12:34:47.905259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:48.254 [2024-12-14 12:34:47.905284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:48.254 [2024-12-14 12:34:47.905298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:48.254 [2024-12-14 12:34:47.907751] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:48.254 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65767 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65767 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65767 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_07aex.txt 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:48.254 12:34:47 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:48.254 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_07aex.txt 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65744 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65744 ']' 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65744 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.513 12:34:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65744 00:08:48.513 killing process with pid 65744 00:08:48.513 12:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.513 12:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.513 12:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65744' 00:08:48.513 12:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65744 00:08:48.513 12:34:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65744 00:08:49.918 12:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:49.918 12:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:49.918 00:08:49.918 real 
0m4.889s 00:08:49.918 user 0m17.363s 00:08:49.918 sys 0m0.506s 00:08:49.918 ************************************ 00:08:49.918 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:49.918 ************************************ 00:08:49.918 12:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.918 12:34:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 12:34:49 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:49.918 12:34:49 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:49.918 12:34:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.918 12:34:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.918 12:34:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.918 ************************************ 00:08:49.918 START TEST nvme_fio 00:08:49.918 ************************************ 00:08:49.918 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:49.918 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:49.918 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:49.918 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:49.918 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:49.918 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:49.918 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.919 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.919 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.179 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.179 12:34:49 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:50.179 12:34:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:50.439 12:34:50 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:50.439 12:34:50 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:50.439 12:34:50 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:50.700 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:50.700 fio-3.35 00:08:50.700 Starting 1 thread 00:08:57.288 00:08:57.288 test: (groupid=0, jobs=1): err= 0: pid=65903: Sat Dec 14 12:34:56 2024 00:08:57.288 read: IOPS=21.0k, BW=81.9MiB/s (85.9MB/s)(164MiB/2001msec) 00:08:57.288 slat (nsec): min=3887, max=64064, avg=5864.48, stdev=2558.46 00:08:57.288 clat (usec): min=290, max=11804, avg=3042.72, stdev=975.43 00:08:57.288 lat (usec): min=296, max=11809, avg=3048.59, stdev=976.77 00:08:57.288 clat percentiles (usec): 00:08:57.288 | 1.00th=[ 2376], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2540], 00:08:57.288 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2638], 60.00th=[ 2704], 00:08:57.288 | 70.00th=[ 2769], 80.00th=[ 3195], 90.00th=[ 4424], 95.00th=[ 5538], 00:08:57.288 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 8979], 99.95th=[10290], 00:08:57.288 | 99.99th=[11207] 00:08:57.288 bw ( KiB/s): min=77904, max=87048, per=99.54%, avg=83456.00, stdev=4876.92, samples=3 00:08:57.288 iops : min=19476, max=21762, avg=20864.00, stdev=1219.23, samples=3 00:08:57.288 write: IOPS=20.9k, BW=81.5MiB/s (85.4MB/s)(163MiB/2001msec); 0 zone resets 00:08:57.288 slat (nsec): min=4184, max=81250, avg=6284.36, stdev=2563.43 00:08:57.288 clat (usec): min=214, max=15533, avg=3053.72, stdev=1010.52 00:08:57.288 lat (usec): min=220, max=15538, avg=3060.00, stdev=1011.77 00:08:57.288 clat percentiles (usec): 00:08:57.288 | 1.00th=[ 2376], 5.00th=[ 2474], 10.00th=[ 2507], 20.00th=[ 2573], 00:08:57.288 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:08:57.288 | 70.00th=[ 2769], 80.00th=[ 3195], 90.00th=[ 4424], 95.00th=[ 5604], 00:08:57.288 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[10945], 99.95th=[11994], 00:08:57.288 | 99.99th=[14353] 00:08:57.288 bw ( KiB/s): min=78432, max=86760, per=100.00%, avg=83530.67, stdev=4467.63, samples=3 00:08:57.288 iops : min=19608, max=21690, avg=20882.67, stdev=1116.91, samples=3 00:08:57.288 lat (usec) : 
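Before each fio launch, fio_plugin above asks ldd which sanitizer runtime the SPDK ioengine links against and preloads it ahead of the plugin, since an ASan-instrumented shared object needs the runtime's interceptors installed before anything else loads. The probe, lifted straight from the trace:

    # resolve the ASan runtime the fio ioengine needs and preload it first
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break        # here: /usr/lib64/libasan.so.8
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096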
250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:08:57.288 lat (msec) : 2=0.13%, 4=87.52%, 10=12.23%, 20=0.10% 00:08:57.288 cpu : usr=99.05%, sys=0.15%, ctx=4, majf=0, minf=607 00:08:57.288 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:57.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:57.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:57.288 issued rwts: total=41943,41727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:57.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:57.288 00:08:57.288 Run status group 0 (all jobs): 00:08:57.288 READ: bw=81.9MiB/s (85.9MB/s), 81.9MiB/s-81.9MiB/s (85.9MB/s-85.9MB/s), io=164MiB (172MB), run=2001-2001msec 00:08:57.288 WRITE: bw=81.5MiB/s (85.4MB/s), 81.5MiB/s-81.5MiB/s (85.4MB/s-85.4MB/s), io=163MiB (171MB), run=2001-2001msec 00:08:57.288 ----------------------------------------------------- 00:08:57.288 Suppressions used: 00:08:57.288 count bytes template 00:08:57.288 1 32 /usr/src/fio/parse.c 00:08:57.288 1 8 libtcmalloc_minimal.so 00:08:57.289 ----------------------------------------------------- 00:08:57.289 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.289 12:34:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:57.289 12:34:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.289 12:34:57 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:57.550 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.550 fio-3.35 00:08:57.550 Starting 1 thread 00:09:05.689 00:09:05.689 test: (groupid=0, jobs=1): err= 0: pid=65964: Sat Dec 14 12:35:03 2024 00:09:05.689 read: IOPS=21.9k, BW=85.5MiB/s (89.7MB/s)(171MiB/2001msec) 00:09:05.689 slat (nsec): min=4165, max=53312, avg=5078.30, stdev=2435.86 00:09:05.689 clat (usec): min=621, max=8155, avg=2923.03, stdev=914.53 00:09:05.689 lat (usec): min=634, max=8168, avg=2928.11, stdev=916.14 00:09:05.689 clat percentiles (usec): 00:09:05.689 | 1.00th=[ 2114], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:05.689 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:05.689 | 70.00th=[ 2769], 80.00th=[ 2900], 90.00th=[ 3589], 95.00th=[ 5473], 00:09:05.689 | 99.00th=[ 6652], 99.50th=[ 6783], 99.90th=[ 7439], 99.95th=[ 7701], 00:09:05.689 | 99.99th=[ 8029] 00:09:05.689 bw ( KiB/s): min=83840, max=91872, per=100.00%, avg=88253.33, stdev=4074.54, samples=3 00:09:05.689 iops : min=20960, max=22968, avg=22063.33, stdev=1018.64, samples=3 00:09:05.689 write: IOPS=21.7k, BW=84.9MiB/s (89.0MB/s)(170MiB/2001msec); 0 zone resets 00:09:05.689 slat (nsec): min=4264, max=65272, avg=5325.85, stdev=2403.78 00:09:05.689 clat (usec): min=710, max=8267, avg=2922.78, stdev=900.69 00:09:05.689 lat (usec): min=723, max=8284, avg=2928.11, stdev=902.22 00:09:05.689 clat percentiles (usec): 00:09:05.689 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:09:05.689 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2704], 00:09:05.689 | 70.00th=[ 2802], 80.00th=[ 2900], 90.00th=[ 3523], 95.00th=[ 5342], 00:09:05.690 | 99.00th=[ 6718], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7767], 00:09:05.690 | 99.99th=[ 8160] 00:09:05.690 bw ( KiB/s): min=83505, max=91696, per=100.00%, avg=88392.33, stdev=4319.04, samples=3 00:09:05.690 iops : min=20876, max=22924, avg=22098.00, stdev=1079.90, samples=3 00:09:05.690 lat (usec) : 750=0.01%, 1000=0.01% 00:09:05.690 lat (msec) : 2=0.50%, 4=90.87%, 10=8.63% 00:09:05.690 cpu : usr=99.10%, sys=0.25%, ctx=2, majf=0, minf=607 00:09:05.690 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:05.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.690 issued rwts: total=43797,43498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.690 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.690 00:09:05.690 Run status group 0 (all jobs): 00:09:05.690 READ: bw=85.5MiB/s (89.7MB/s), 85.5MiB/s-85.5MiB/s (89.7MB/s-89.7MB/s), io=171MiB (179MB), run=2001-2001msec 00:09:05.690 WRITE: bw=84.9MiB/s (89.0MB/s), 84.9MiB/s-84.9MiB/s (89.0MB/s-89.0MB/s), io=170MiB (178MB), run=2001-2001msec 00:09:05.690 ----------------------------------------------------- 00:09:05.690 Suppressions used: 00:09:05.690 count bytes template 00:09:05.690 1 32 /usr/src/fio/parse.c 
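example_config.fio itself never appears in this log, but the job banner (rw=randrw, 4096B blocks, ioengine=spdk, iodepth=128) and the fixed run=2001-2001msec status lines pin down most of it. A guess at a consistent job file, written from bash; thread=1 (the SPDK plugin runs jobs as threads), time_based and the exact option spellings are assumptions, and bs plus filename arrive on the command line as shown above:

    # hypothetical reconstruction of the job file; not the shipped one
    printf '%s\n' '[global]' 'ioengine=spdk' 'thread=1' 'time_based=1' \
        'runtime=2' 'rw=randrw' 'iodepth=128' '' '[test]' \
        > /tmp/example_config.fio
    # bs and --filename come from the fio command line, as in the runs above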
00:09:05.690 1 8 libtcmalloc_minimal.so 00:09:05.690 ----------------------------------------------------- 00:09:05.690 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:05.690 12:35:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:05.690 12:35:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:05.690 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:05.690 fio-3.35 00:09:05.690 Starting 1 thread 00:09:12.276 00:09:12.276 test: (groupid=0, jobs=1): err= 0: pid=66020: Sat Dec 14 12:35:11 2024 00:09:12.276 read: IOPS=20.0k, BW=78.1MiB/s (81.9MB/s)(156MiB/2001msec) 00:09:12.276 slat (nsec): min=4793, max=71837, avg=5917.41, stdev=2494.93 00:09:12.276 clat (usec): min=247, max=9836, avg=3187.03, stdev=972.11 00:09:12.276 lat (usec): min=253, max=9896, avg=3192.94, stdev=973.42 00:09:12.276 clat 
percentiles (usec): 00:09:12.276 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:09:12.276 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2900], 00:09:12.276 | 70.00th=[ 3032], 80.00th=[ 3392], 90.00th=[ 4555], 95.00th=[ 5604], 00:09:12.276 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 8455], 00:09:12.276 | 99.99th=[ 9765] 00:09:12.276 bw ( KiB/s): min=78408, max=80952, per=99.17%, avg=79312.00, stdev=1422.76, samples=3 00:09:12.276 iops : min=19602, max=20238, avg=19828.00, stdev=355.69, samples=3 00:09:12.276 write: IOPS=20.0k, BW=77.9MiB/s (81.7MB/s)(156MiB/2001msec); 0 zone resets 00:09:12.276 slat (nsec): min=4900, max=70413, avg=6196.64, stdev=2385.50 00:09:12.276 clat (usec): min=211, max=9754, avg=3194.05, stdev=970.79 00:09:12.276 lat (usec): min=216, max=9772, avg=3200.25, stdev=972.03 00:09:12.276 clat percentiles (usec): 00:09:12.276 | 1.00th=[ 2442], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2638], 00:09:12.276 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2900], 00:09:12.276 | 70.00th=[ 3032], 80.00th=[ 3392], 90.00th=[ 4490], 95.00th=[ 5604], 00:09:12.276 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 8029], 99.95th=[ 8717], 00:09:12.276 | 99.99th=[ 9634] 00:09:12.276 bw ( KiB/s): min=78272, max=81400, per=99.41%, avg=79341.33, stdev=1783.31, samples=3 00:09:12.276 iops : min=19568, max=20352, avg=19835.33, stdev=447.54, samples=3 00:09:12.276 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:12.276 lat (msec) : 2=0.09%, 4=86.25%, 10=13.62% 00:09:12.276 cpu : usr=99.00%, sys=0.20%, ctx=2, majf=0, minf=607 00:09:12.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:12.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.276 issued rwts: total=40008,39924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.276 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.276 00:09:12.276 Run status group 0 (all jobs): 00:09:12.276 READ: bw=78.1MiB/s (81.9MB/s), 78.1MiB/s-78.1MiB/s (81.9MB/s-81.9MB/s), io=156MiB (164MB), run=2001-2001msec 00:09:12.276 WRITE: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=156MiB (164MB), run=2001-2001msec 00:09:12.276 ----------------------------------------------------- 00:09:12.276 Suppressions used: 00:09:12.276 count bytes template 00:09:12.276 1 32 /usr/src/fio/parse.c 00:09:12.276 1 8 libtcmalloc_minimal.so 00:09:12.276 ----------------------------------------------------- 00:09:12.276 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.276 12:35:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.276 
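For each controller, the loop above runs spdk_nvme_identify twice: once to confirm a namespace exists at all, once to see whether the namespace uses extended (metadata-interleaved) LBAs, which would force a larger block size; every controller in this run reports plain 4 KiB LBAs, so bs stays 4096. The two probes in isolation:

    # pick the fio block size from identify output for one controller
    identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
    bdf=0000:00:13.0
    out=$("$identify" -r "trtype:PCIe traddr:$bdf")
    if ! grep -qE '^Namespace ID:[0-9]+' <<<"$out"; then
        echo "no namespace on $bdf; skipping"
    elif grep -q 'Extended Data LBA' <<<"$out"; then
        bs=    # data+metadata size; not derivable from this log (branch never taken here)
    else
        bs=4096
    fi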
12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.276 12:35:11 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:12.276 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.276 fio-3.35 00:09:12.276 Starting 1 thread 00:09:24.578 00:09:24.578 test: (groupid=0, jobs=1): err= 0: pid=66082: Sat Dec 14 12:35:22 2024 00:09:24.578 read: IOPS=20.6k, BW=80.5MiB/s (84.4MB/s)(161MiB/2001msec) 00:09:24.578 slat (nsec): min=4804, max=66954, avg=5890.75, stdev=2474.80 00:09:24.578 clat (usec): min=225, max=10433, avg=3097.84, stdev=993.99 00:09:24.578 lat (usec): min=231, max=10500, avg=3103.73, stdev=995.40 00:09:24.578 clat percentiles (usec): 00:09:24.578 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2540], 00:09:24.578 | 30.00th=[ 2606], 40.00th=[ 2671], 50.00th=[ 2737], 60.00th=[ 2802], 00:09:24.578 | 70.00th=[ 2933], 80.00th=[ 3261], 90.00th=[ 4490], 95.00th=[ 5604], 00:09:24.578 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7898], 99.95th=[ 8586], 00:09:24.578 | 99.99th=[10028] 00:09:24.578 bw ( KiB/s): min=78760, max=83344, per=97.89%, avg=80674.67, stdev=2383.36, samples=3 00:09:24.578 iops : min=19690, max=20836, avg=20168.67, stdev=595.84, samples=3 00:09:24.578 write: IOPS=20.5k, BW=80.2MiB/s (84.1MB/s)(161MiB/2001msec); 0 zone resets 00:09:24.578 slat (nsec): min=4880, max=53585, avg=6298.33, stdev=2556.00 00:09:24.578 clat (usec): min=386, max=10325, avg=3098.12, stdev=988.53 00:09:24.578 lat (usec): min=393, max=10345, avg=3104.42, stdev=989.93 00:09:24.578 clat percentiles (usec): 00:09:24.578 | 1.00th=[ 2311], 5.00th=[ 2442], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:24.578 | 30.00th=[ 2606], 40.00th=[ 2671], 
50.00th=[ 2737], 60.00th=[ 2802], 00:09:24.578 | 70.00th=[ 2933], 80.00th=[ 3261], 90.00th=[ 4424], 95.00th=[ 5604], 00:09:24.578 | 99.00th=[ 7046], 99.50th=[ 7308], 99.90th=[ 8029], 99.95th=[ 8717], 00:09:24.578 | 99.99th=[ 9896] 00:09:24.578 bw ( KiB/s): min=78616, max=83264, per=98.33%, avg=80765.33, stdev=2343.61, samples=3 00:09:24.578 iops : min=19654, max=20816, avg=20191.33, stdev=585.90, samples=3 00:09:24.578 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:24.578 lat (msec) : 2=0.09%, 4=87.64%, 10=12.22%, 20=0.01% 00:09:24.578 cpu : usr=99.15%, sys=0.10%, ctx=2, majf=0, minf=606 00:09:24.578 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.578 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.578 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.578 issued rwts: total=41226,41090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.578 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.578 00:09:24.578 Run status group 0 (all jobs): 00:09:24.578 READ: bw=80.5MiB/s (84.4MB/s), 80.5MiB/s-80.5MiB/s (84.4MB/s-84.4MB/s), io=161MiB (169MB), run=2001-2001msec 00:09:24.578 WRITE: bw=80.2MiB/s (84.1MB/s), 80.2MiB/s-80.2MiB/s (84.1MB/s-84.1MB/s), io=161MiB (168MB), run=2001-2001msec 00:09:24.578 ----------------------------------------------------- 00:09:24.578 Suppressions used: 00:09:24.578 count bytes template 00:09:24.578 1 32 /usr/src/fio/parse.c 00:09:24.578 1 8 libtcmalloc_minimal.so 00:09:24.578 ----------------------------------------------------- 00:09:24.578 00:09:24.578 12:35:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.578 12:35:22 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:24.578 00:09:24.578 real 0m32.699s 00:09:24.578 user 0m16.980s 00:09:24.578 sys 0m30.231s 00:09:24.578 12:35:22 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.578 ************************************ 00:09:24.578 END TEST nvme_fio 00:09:24.578 ************************************ 00:09:24.578 12:35:22 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 00:09:24.578 real 1m43.990s 00:09:24.578 user 3m40.180s 00:09:24.578 sys 0m42.176s 00:09:24.578 12:35:22 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.578 12:35:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 ************************************ 00:09:24.578 END TEST nvme 00:09:24.578 ************************************ 00:09:24.578 12:35:22 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:24.578 12:35:22 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.578 12:35:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.578 12:35:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.578 12:35:22 -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 ************************************ 00:09:24.578 START TEST nvme_scc 00:09:24.578 ************************************ 00:09:24.578 12:35:22 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.578 * Looking for test storage... 
00:09:24.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.578 12:35:22 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:24.578 12:35:22 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:24.578 12:35:22 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:24.578 12:35:22 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:24.578 12:35:22 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:24.579 12:35:22 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.579 12:35:22 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.579 --rc genhtml_branch_coverage=1 00:09:24.579 --rc genhtml_function_coverage=1 00:09:24.579 --rc genhtml_legend=1 00:09:24.579 --rc geninfo_all_blocks=1 00:09:24.579 --rc geninfo_unexecuted_blocks=1 00:09:24.579 00:09:24.579 ' 00:09:24.579 12:35:22 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.579 --rc genhtml_branch_coverage=1 00:09:24.579 --rc genhtml_function_coverage=1 00:09:24.579 --rc genhtml_legend=1 00:09:24.579 --rc geninfo_all_blocks=1 00:09:24.579 --rc geninfo_unexecuted_blocks=1 00:09:24.579 00:09:24.579 ' 00:09:24.579 12:35:22 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.579 --rc genhtml_branch_coverage=1 00:09:24.579 --rc genhtml_function_coverage=1 00:09:24.579 --rc genhtml_legend=1 00:09:24.579 --rc geninfo_all_blocks=1 00:09:24.579 --rc geninfo_unexecuted_blocks=1 00:09:24.579 00:09:24.579 ' 00:09:24.579 12:35:22 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:24.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.579 --rc genhtml_branch_coverage=1 00:09:24.579 --rc genhtml_function_coverage=1 00:09:24.579 --rc genhtml_legend=1 00:09:24.579 --rc geninfo_all_blocks=1 00:09:24.579 --rc geninfo_unexecuted_blocks=1 00:09:24.579 00:09:24.579 ' 00:09:24.579 12:35:22 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.579 12:35:22 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.579 12:35:22 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.579 12:35:22 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.579 12:35:22 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.579 12:35:22 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:24.579 12:35:22 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
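The lt 1.15 2 trace above (repeated at the top of each test that sources autotest_common.sh) gates the lcov flags on "lcov older than 2": cmp_versions splits both version strings on '.', '-' and ':' and compares them component-wise. A condensed, self-contained sketch of that logic, with missing components padded with zeros here:

    # dotted-version compare: succeeds (returns 0) when $1 < $2
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<<"$1"
        IFS=.-: read -ra ver2 <<<"$2"
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad missing fields with 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo 'old lcov: use the --rc ..._coverage=1 spellings'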
00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:24.579 12:35:22 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:24.579 12:35:22 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.579 12:35:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:24.579 12:35:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:24.579 12:35:22 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:24.579 12:35:22 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.579 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.579 Waiting for block devices as requested 00:09:24.579 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.579 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.579 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.579 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.799 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.799 12:35:28 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:28.799 12:35:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.799 12:35:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:28.799 12:35:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.799 12:35:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
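The long run of IFS=: / read / eval steps that starts here is functions.sh's nvme_get unrolled by xtrace: it pipes nvme id-ctrl through a read loop and stores every "field : value" pair in an associative array (nvme0[vid]=0x1b36 above, then ssvid, sn, mn, and so on below). An eval-free condensation of the same loop; the real helper evals through the array name it is handed:

    # parse `nvme id-ctrl` "field : value" lines into an associative array
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner lines without a value
        reg=${reg//[[:space:]]/}         # "vid       " -> "vid"
        ctrl[$reg]=${val# }              # drop the single space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} sn=${ctrl[sn]} mdts=${ctrl[mdts]}"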
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 '
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl '
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 '
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
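One value above is worth decoding once: mdts=7 is the log2 of the largest single transfer, counted in units of the controller's minimum page size (CAP.MPSMIN). Assuming the usual 4 KiB minimum page, the ceiling works out to 512 KiB per I/O:

  mdts=7 page=4096                        # 4 KiB MPSMIN is an assumption here
  echo "$(( (1 << mdts) * page )) bytes"  # 524288 bytes = 512 KiB per command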
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
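ver=0x10400 packs the implemented spec revision into one dword: major version in bits 31:16, minor in 15:8, tertiary in 7:0, so this QEMU controller reports NVMe 1.4.0:

  ver=0x10400
  printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $((ver >> 8 & 0xff)) $((ver & 0xff))
  # -> NVMe 1.4.0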
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:09:28.799 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
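oacs=0x12a above is a bitmask of optional admin commands. A sketch of testing one capability bit; the bit position is taken from the NVMe Identify Controller layout (bit 3 is Namespace Management), not from anything in this log, so treat it as an assumption:

  oacs=0x12a
  (( oacs & 1 << 3 )) && echo "Namespace Management supported"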
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
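wctemp=343 and cctemp=373 above are composite-temperature thresholds reported in Kelvin; converted, the QEMU model warns at 70 C and goes critical at 100 C:

  wctemp=343 cctemp=373
  echo "warn at $((wctemp - 273)) C, critical at $((cctemp - 273)) C"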
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:28.800 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
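sqes=0x66 and cqes=0x44 above each carry two 4-bit log2 sizes: the low nibble is the required queue entry size, the high nibble the maximum supported. Decoding confirms the standard 64-byte submission and 16-byte completion entries:

  sqes=0x66 cqes=0x44
  echo "SQE min $((1 << (sqes & 0xf))) / max $((1 << (sqes >> 4))) bytes"  # 64 / 64
  echo "CQE min $((1 << (cqes & 0xf))) / max $((1 << (cqes >> 4))) bytes"  # 16 / 16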
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:28.801 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
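The for-loop at functions.sh@54 above uses a bash extglob to catch both namespace node flavors in one pass: with ctrl=/sys/class/nvme/nvme0, "ng${ctrl##*nvme}" expands to ng0 and "${ctrl##*/}n" to nvme0n, so one pattern matches the generic character node ng0n1 as well as the block node nvme0n1. A sketch runnable against the same sysfs layout (extglob is switched on explicitly here; functions.sh relies on it already being enabled):

  shopt -s extglob
  ctrl=/sys/class/nvme/nvme0
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node: ${ns##*/}"   # -> ng0n1, nvme0n1
  done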
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:09:28.802 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
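flbas=0x4 (the low nibble selects the active format) points at the lbaf4 entry flagged "(in use)" above: ms:0, lbads:12, i.e. 2^12 = 4096-byte logical blocks with no metadata. Combined with nsze=0x140000 that sizes each of these namespaces at 5 GiB:

  nsze=$((0x140000)) lbads=12
  bytes=$(( nsze * (1 << lbads) ))
  echo "$bytes bytes ($(( bytes >> 30 )) GiB)"   # 5368709120 bytes (5 GiB)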
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:09:28.803 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "'
00:09:28.804 12:35:28 nvme_scc -- nvme/functions.sh@23 -- #
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:28.805 12:35:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.805 12:35:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:28.805 12:35:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.805 12:35:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:28.805 12:35:28 
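
[annotation] The trace above finishes registering controller nvme0 (its namespace map and bdf 0000:00:11.0) and then shows the discovery loop admitting the next controller, nvme1 at 0000:00:10.0, through pci_can_use. A minimal sketch of that enumeration pattern follows; it is illustrative only, and the PCI_ALLOWED allowlist variable is an assumption rather than SPDK's exact implementation:

    # Hypothetical sketch of the functions.sh@47-51 discovery loop seen above.
    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                        # glob may match nothing
      pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:10.0
      # pci_can_use-style gate: an empty allowlist admits every controller
      if [[ -n ${PCI_ALLOWED:-} && " $PCI_ALLOWED " != *" $pci "* ]]; then
        continue
      fi
      ctrl_dev=${ctrl##*/}                              # nvme0, nvme1, ...
      echo "admitting $ctrl_dev at $pci"
    done
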
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 
12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:28.805 
12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:28.805 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.806 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.807 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.807 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.808 12:35:28 
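
[annotation] The id-ctrl walk just completed (functions.sh@16-23 in the trace) repeats one pattern per register: split each nvme-cli output line on ':' and eval the pair into a global associative array. An approximate reconstruction, simplified from what the trace shows, is below; the nvme-cli path is the one the trace itself invokes, the whitespace trimming is a best-effort guess, and index-carrying registers such as ps0/lbaf0 are handled specially in the real script and elided here:

    # Approximate sketch of nvme_get as exercised above; not SPDK's exact code.
    nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                     # e.g. declare -gA nvme1=()
      while IFS=: read -r reg val; do
        [[ -n $reg ]] || continue             # skip blank output lines
        reg=${reg%%[[:space:]]*}              # register mnemonic, e.g. sn
        val=${val#"${val%%[![:space:]]*}"}    # drop leading whitespace only
        eval "${ref}[$reg]=\"\$val\""         # yields nvme1[sn]='12340 '
      done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # usage mirroring the trace: nvme_get nvme1 id-ctrl /dev/nvme1
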
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.808 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:28.809 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 
12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.809 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
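The pattern repeating through this trace is nvme_get populating one global associative array per device: every "field : value" line printed by nvme-cli is split on the first colon (hence the paired "IFS=:" / "read -r reg val" steps) and eval'd into the array. A minimal sketch of that loop, assuming simplified handling rather than the exact nvme/functions.sh implementation:

  nvme_get_sketch() {
      local ref=$1 id_cmd=$2 dev=$3 reg val
      local -gA "$ref=()"                      # e.g. declare -gA nvme1n1=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}             # strip the padding around the key
          [[ -n $reg && -n $val ]] || continue # keep only populated fields
          eval "${ref}[\$reg]=\${val# }"       # e.g. nvme1n1[nsze]=0x17a17a
      done < <(nvme "$id_cmd" "$dev")
  }
  # hypothetical usage: nvme_get_sketch nvme1n1 id-ns /dev/nvme1n1

Note that splitting only on the first colon is what lets compound values such as "ms:0 lbads:9 rp:0" survive intact in val, exactly as seen in the lbaf assignments above.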
00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:28.810 
12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.810 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:28.811 12:35:28 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:28.811 12:35:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:28.811 12:35:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:28.811 12:35:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:28.811 12:35:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
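Stepping back, the functions.sh@47-52 steps above show the enumeration driving all of this: each controller under /sys/class/nvme is checked against the allowed PCI list (pci_can_use) and then identified. A condensed sketch, reusing the hypothetical helper from the earlier sketch and omitting the per-namespace bookkeeping:

  declare -A bdfs=()
  for ctrl in /sys/class/nvme/nvme*; do
      pci=$(< "$ctrl/address")                 # controller BDF, e.g. 0000:00:12.0
      ctrl_dev=${ctrl##*/}                     # e.g. nvme2
      nvme_get_sketch "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      bdfs["$ctrl_dev"]=$pci                   # mirrors the bdfs[...] assignment above
  done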
00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.811 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
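One field from the id-ctrl dump above is worth decoding: nvme2[ver]=0x10400 packs the spec version as major[31:16], minor[15:8], tertiary[7:0], so this QEMU controller reports NVMe 1.4.0. A standalone illustration (not part of the test script):

  ver=0x10400                                  # nvme2[ver] as captured above
  printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
  # -> NVMe 1.4.0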
00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:28.812 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
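The wctemp/cctemp values just captured are reported in kelvins, as the NVMe spec defines them; converted for readability (illustration only, integer offset of 273):

  for k in 343 373; do printf '%d K = ~%d C\n' "$k" $(( k - 273 )); done
  # -> 343 K = ~70 C (warning threshold), 373 K = ~100 C (critical threshold)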
00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:28.812 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:28.813 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:28.813 
12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.813 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.814 
12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
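From the ng2n1 fields just captured, the usable geometry follows directly: flbas=0x4 selects LBA format 4, and assuming that format carries lbads:12 like the other QEMU namespaces in this log (ng2n1's own lbaf table is presumably printed as the listing continues), the namespace uses 4096-byte blocks. A quick sanity check of the math:

  flbas=0x4; lbads=12; nsze=0x100000           # nsze/flbas captured above; lbads assumed
  echo "block size: $(( 1 << lbads )) bytes"               # 4096
  echo "capacity:   $(( (nsze * (1 << lbads)) >> 30 )) GiB"  # 4 GiB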
00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.814 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:28.815 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.815 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 
12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.816 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.817 12:35:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.817 12:35:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.817 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.818 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:28.818 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:28.819 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:28.819 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:28.819 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:28.820 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:29.084 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.084 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:29.085 
12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:29.085 12:35:28 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.085 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:29.086 12:35:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:29.086 12:35:28 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:29.087 12:35:28 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.087 12:35:28 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:29.087 12:35:28 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.087 12:35:28 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:29.087 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:29.087 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.087 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 
12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:29.088 12:35:28 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 
12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.088 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:29.089 
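Two of the values just captured repay decoding: sqes=0x66 and cqes=0x44 pack the required (low nibble) and maximum (high nibble) queue entry sizes as log2 of bytes, per the NVMe Identify Controller layout. Checking with shell arithmetic:

    sqes=0x66 cqes=0x44
    # low nibble = required entry size, high nibble = maximum, both log2(bytes)
    echo "SQE: required $((1 << (sqes & 0xf))) B, max $((1 << (sqes >> 4))) B"   # 64 B / 64 B
    echo "CQE: required $((1 << (cqes & 0xf))) B, max $((1 << (cqes >> 4))) B"   # 16 B / 16 B

So this QEMU controller advertises the standard 64-byte submission and 16-byte completion queue entries with no larger variants.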
12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.089 12:35:28 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:29.090 12:35:28 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
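The scan above is get_ctrls_with_feature in action: for each controller in the ctrls map it calls ctrl_has_scc, which dereferences the per-controller array through a bash nameref (local -n _ctrl=nvme1), reads the ONCS field (0x15d on every controller here), and tests bit 8, the Copy command support bit in the NVMe ONCS definition. Every controller that passes is echoed back. The test itself reduces to one line of shell arithmetic:

    oncs=0x15d
    # ONCS bit 8 = controller supports the Copy command (Simple Copy)
    if (( oncs & 1 << 8 )); then
        echo "SCC supported"
    fi

0x15d is binary 1_0101_1101, so bit 8 is set and all four controllers qualify; the caller then takes nvme1 at 0000:00:10.0 as the test target.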
00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:29.090 12:35:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:29.090 12:35:28 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:29.090 12:35:28 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:29.663 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:30.234 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.234 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.234 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.234 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:30.234 12:35:29 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:30.234 12:35:29 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:30.234 12:35:29 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.234 12:35:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 ************************************ 00:09:30.234 START TEST nvme_simple_copy 00:09:30.234 ************************************ 00:09:30.235 12:35:29 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:30.495 Initializing NVMe Controllers 00:09:30.495 Attaching to 0000:00:10.0 00:09:30.495 Controller supports SCC. Attached to 0000:00:10.0 00:09:30.495 Namespace ID: 1 size: 6GB 00:09:30.495 Initialization complete. 
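Initialization is done, and the output that follows records the whole experiment: the test writes random data to LBAs 0 through 63, issues a single Simple Copy to destination LBA 256, reads the destination back, and reports how many LBAs match. The same write-copy-verify shape can be mimicked against any scratch block device with ordinary tools; in this rough sketch dd is only a stand-in for the NVMe Copy command that the real simple_copy app sends through SPDK, and the device path is an assumption:

    dev=/dev/nvme1n1             # DESTRUCTIVE: point this at a scratch device only
    bs=4096                      # matches the namespace block size reported below
    head -c $((64 * bs)) /dev/urandom > src.bin
    dd if=src.bin of="$dev" bs=$bs count=64 conv=fsync           # write LBAs 0-63
    dd if="$dev" of="$dev" bs=$bs count=64 seek=256 conv=fsync   # copy them to LBA 256
    dd if="$dev" bs=$bs skip=256 count=64 2>/dev/null | cmp - src.bin \
        && echo "LBAs matching Written Data: 64"

The point of the real test is that the copy step is one controller-side command rather than a host read and rewrite, which is exactly the capability the ONCS check above established.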
00:09:30.495 00:09:30.495 Controller QEMU NVMe Ctrl (12340 ) 00:09:30.495 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:30.495 Namespace Block Size:4096 00:09:30.495 Writing LBAs 0 to 63 with Random Data 00:09:30.495 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:30.495 LBAs matching Written Data: 64 00:09:30.495 00:09:30.495 real 0m0.301s 00:09:30.495 user 0m0.105s 00:09:30.495 sys 0m0.094s 00:09:30.495 12:35:30 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.495 ************************************ 00:09:30.495 END TEST nvme_simple_copy 00:09:30.495 ************************************ 00:09:30.495 12:35:30 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:30.495 00:09:30.495 real 0m7.840s 00:09:30.495 user 0m1.145s 00:09:30.495 sys 0m1.494s 00:09:30.495 12:35:30 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.496 ************************************ 00:09:30.496 END TEST nvme_scc 00:09:30.496 ************************************ 00:09:30.496 12:35:30 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:30.757 12:35:30 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:30.757 12:35:30 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:30.757 12:35:30 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:30.757 12:35:30 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:30.757 12:35:30 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:30.757 12:35:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:30.757 12:35:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.757 12:35:30 -- common/autotest_common.sh@10 -- # set +x 00:09:30.757 ************************************ 00:09:30.757 START TEST nvme_fdp 00:09:30.757 ************************************ 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:30.757 * Looking for test storage... 00:09:30.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.757 --rc genhtml_branch_coverage=1 00:09:30.757 --rc genhtml_function_coverage=1 00:09:30.757 --rc genhtml_legend=1 00:09:30.757 --rc geninfo_all_blocks=1 00:09:30.757 --rc geninfo_unexecuted_blocks=1 00:09:30.757 00:09:30.757 ' 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.757 --rc genhtml_branch_coverage=1 00:09:30.757 --rc genhtml_function_coverage=1 00:09:30.757 --rc genhtml_legend=1 00:09:30.757 --rc geninfo_all_blocks=1 00:09:30.757 --rc geninfo_unexecuted_blocks=1 00:09:30.757 00:09:30.757 ' 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.757 --rc genhtml_branch_coverage=1 00:09:30.757 --rc genhtml_function_coverage=1 00:09:30.757 --rc genhtml_legend=1 00:09:30.757 --rc geninfo_all_blocks=1 00:09:30.757 --rc geninfo_unexecuted_blocks=1 00:09:30.757 00:09:30.757 ' 00:09:30.757 12:35:30 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.757 --rc genhtml_branch_coverage=1 00:09:30.757 --rc genhtml_function_coverage=1 00:09:30.757 --rc genhtml_legend=1 00:09:30.757 --rc geninfo_all_blocks=1 00:09:30.757 --rc geninfo_unexecuted_blocks=1 00:09:30.757 00:09:30.757 ' 00:09:30.757 12:35:30 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.757 12:35:30 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:30.757 12:35:30 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:30.757 12:35:30 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:30.757 12:35:30 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.757 12:35:30 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.758 12:35:30 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.758 12:35:30 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.758 12:35:30 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.758 12:35:30 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.758 12:35:30 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.758 12:35:30 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:30.758 12:35:30 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:30.758 12:35:30 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:30.758 12:35:30 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.758 12:35:30 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.330 Waiting for block devices as requested 00:09:31.330 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.330 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.591 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.591 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:36.943 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:36.943 12:35:36 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:36.943 12:35:36 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:36.943 12:35:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.943 12:35:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:36.943 12:35:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.943 12:35:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.943 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:36.944 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.944 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:36.944 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:36.945 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.945 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 
12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:36.946 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:36.946 12:35:36 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:36.946 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:36.946 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
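The repeating IFS=: / read -r reg val / eval pattern in this trace is the whole parser: nvme_get feeds the nvme-cli identify output through a read loop, splits each line on the first colon, and stores every non-empty field in a global associative array named after the device. A minimal sketch of that loop, reconstructed from the xtrace above rather than copied from SPDK's nvme/functions.sh, looks roughly like this (the args after the array name are the nvme-cli subcommand and device; the suite actually invokes its own build at /usr/local/src/nvme-cli/nvme):

    # Reconstructed from the xtrace above; not the verbatim SPDK helper.
    nvme_get() {
        local ref=$1 reg val          # ref names the array, e.g. ng0n1
        shift                          # remaining args: subcommand + device
        local -gA "$ref=()"            # global associative array, as at @20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue  # header/blank lines carry no value (@22)
            reg=${reg// /}             # strip the padding around the field name
            val=${val# }               # and the single space after the colon
            eval "${ref}[$reg]=\$val"  # e.g. ng0n1[nsze]=0x140000 (@23)
        done < <(nvme "$@")            # assumes an nvme-cli binary on PATH
    }

So a call like nvme_get ng0n1 id-ns /dev/ng0n1, as seen at @57 in this trace, fills ng0n1[nsze], ng0n1[ncap], and so on, one assignment per logged eval.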
00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:36.947 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.947 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
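With all eight LBA formats of ng0n1 captured above (lbaf4, ms:0 lbads:12, is the one in use), the loop at functions.sh@54 records the node and moves on to the sibling block device nvme0n1. The glob in that for statement is extglob syntax: for nvme0 it expands to entries under /sys/class/nvme/nvme0 that match either ng0* (the character device) or nvme0n* (the block device). A standalone sketch of the same enumeration, assuming extglob and the sysfs layout shown in this log:

    # Minimal sketch of the @54 enumeration seen in this trace.
    shopt -s extglob nullglob
    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        idx=${ctrl##*nvme}              # "0" for .../nvme0
        for ns in "$ctrl/"@("ng${idx}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}            # ng0n1, then nvme0n1
            echo "found namespace node: $ns_dev"
        done
    done

That pairing is why each namespace is identified twice in the log, once through /dev/ng0n1 and once through /dev/nvme0n1, with identical nsze/ncap/nuse values.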
00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:36.948 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:36.948 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.949 12:35:36 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:36.949 12:35:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:36.949 12:35:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.949 12:35:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:36.949 12:35:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.950 12:35:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:36.950 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:36.950 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
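What repeats above is one small loop in nvme/functions.sh: nvme_get runs nvme-cli against a device, splits each output line on the first ':' into a register name and value, and evals the pair into a global associative array (here nvme1). A minimal sketch reconstructed from the line numbers visible in the trace (@16-@23); the names match the trace, but the exact body, and in particular the inline whitespace trimming, is an assumption, since xtrace only shows the already-expanded eval:

    nvme_get() {
        local ref=$1 reg val                         # @17: target array name
        shift                                        # @18: args for nvme-cli
        local -gA "$ref=()"                          # @20: global assoc array
        while IFS=: read -r reg val; do              # @21: split "reg : val"
            # Trimming the padding nvme-cli prints is assumed to happen
            # inline; only the expanded result appears in the xtrace.
            [[ -n $val ]] && eval "${ref}[${reg%% *}]=\"${val# }\""  # @22/@23
        done < <(/usr/local/src/nvme-cli/nvme "$@")  # @16: id-ctrl / id-ns
    }
    # Invoked as in the trace: nvme_get nvme1 id-ctrl /dev/nvme1

Once populated, later checks read these values straight back out, e.g. ${nvme1[mdts]} (7 above) caps a transfer at 2^7 units of the controller's minimum page size.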
00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
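Worth noting as these fields scroll past: the wctemp/cctemp values a few entries up are raw kelvins from Identify Controller, so this QEMU controller warns at 343 K and goes critical at 373 K. A quick read-back from the array populated above, using the usual integer 273 offset as an approximation:

    echo "warning:  $(( ${nvme1[wctemp]} - 273 )) C"   # 343 K -> 70 C
    echo "critical: $(( ${nvme1[cctemp]} - 273 )) C"   # 373 K -> 100 C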
00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.951 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:36.952 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:36.953 12:35:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
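The nsze/ncap/nuse and flbas fields just parsed are enough to derive the namespace geometry. A hedged sketch: flbas bits 0-3 select the active LBA format, and its lbads (log2 of the data block size) comes from the matching lbafN descriptor parsed a little further down ("ms:64 lbads:12 rp:0 (in use)"); the hardcoded 12 below is read off that entry, not computed:

    fmt=$(( ${ng1n1[flbas]} & 0xf ))    # 0x7 -> LBA format index 7
    lbads=12                            # from the in-use lbaf7 descriptor
    echo "$(( ${ng1n1[nsze]} << lbads )) bytes"  # 0x17a17a blocks -> ~5.9 GiB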
00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:36.953 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.953 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
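The surrounding loop (@54-@57, visible where this ng1n1 pass began and again where the nvme1n1 pass starts below) walks every namespace node under the controller: each controller directory exposes both a generic character node (ng1n1) and a block node (nvme1n1), and the extglob pattern from the trace matches either, so each gets its own id-ns parse. A sketch, assuming extglob/nullglob are enabled as they are in SPDK's scripts:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # @54
        [[ -e $ns ]] || continue                                  # @55
        nvme_get "${ns##*/}" id-ns "/dev/${ns##*/}"               # @56/@57
    done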
00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.954 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.954 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:36.955 12:35:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:36.955 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
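Each lbafN entry lands in the array as a flat string, so anything that needs the metadata size or block size has to pick it apart. A tiny helper along these lines would do it; the function name lbaf_fields and the regex are ours, an assumption based on the nvme-cli output format shown in this log:

    lbaf_fields() {
        local desc=$1
        [[ $desc =~ ms:([0-9]+)\ lbads:([0-9]+)\ rp:([0-9]+) ]] || return 1
        echo "metadata=${BASH_REMATCH[1]}B" \
             "block=$(( 1 << BASH_REMATCH[2] ))B" \
             "rp=${BASH_REMATCH[3]}"
    }
    lbaf_fields "${nvme1n1[lbaf3]}"   # -> metadata=64B block=512B rp=0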
00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.955 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:36.956 12:35:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:36.956 12:35:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:36.956 12:35:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:36.956 12:35:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
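Every assignment in this trace follows the same shape: nvme-cli prints "key : value" rows, the script splits each row with IFS=: read -r reg val, and an eval writes the pair into a named global associative array (nvme2 here). A minimal sketch of that pattern, reconstructed from the xtrace rather than quoted from nvme/functions.sh:

  # Sketch only: the cleanup details are assumptions inferred from the trace,
  # not the verbatim SPDK implementation.
  nvme_get_sketch() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                  # e.g. declare -gA nvme2=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}         # 'mdts      ' -> 'mdts'
          [[ -n $reg && -n $val ]] || continue
          eval "${ref}[\$reg]=\${val# }"   # e.g. nvme2[mdts]='7'
      done < <("$@")
  }
  nvme_get_sketch nvme2 nvme id-ctrl /dev/nvme2
  echo "${nvme2[mdts]}"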
00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:36.956 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:36.957 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
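The wctemp=343 and cctemp=373 values just recorded for nvme2 are the warning and critical composite temperature thresholds, which NVMe reports in kelvins. For reference, plain integer arithmetic (not something the script itself does) converts them to approximate degrees Celsius:

  k_to_c() { echo "$(( $1 - 273 )) C"; }
  k_to_c 343   # ~70 C  (warning threshold)
  k_to_c 373   # ~100 C (critical threshold)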
00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:36.957 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.957 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
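Among the fields stored above, mdts=7 bounds the maximum data transfer size as a power-of-two multiple of the controller's minimum memory page size. Assuming the usual 4 KiB CAP.MPSMIN for this QEMU controller (an assumption; the trace does not show the CAP register), that works out as:

  mdts=7; mpsmin_bytes=4096                             # 4 KiB page size assumed
  echo "$(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # -> 512 KiB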
00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.958 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 
12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.959 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.960 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000
00:09:36.960 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n2 id-ns fields (/dev/ng2n2): ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:36.961 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n2: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:36.961 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:36.961 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:36.962 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:36.962 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
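The assignments above are produced by nvme_get, which pipes `nvme id-ns` through a small IFS=: read loop (functions.sh@16-23 in the trace). A minimal sketch of that pattern, assuming nvme-cli's one-"field : value"-per-line output; the name and trimming details here are illustrative, not the exact functions.sh implementation:

# Sketch (assumptions above): parse `nvme id-ns` into a global assoc array.
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    declare -gA "$ref=()"                  # e.g. ng2n2=()
    while IFS=: read -r reg val; do        # split on the first ':'
        reg=${reg//[[:space:]]/}           # "lbaf  4" -> "lbaf4"
        val=${val#"${val%%[! ]*}"}         # trim leading spaces only
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[${reg}]=\$val"        # ng2n2[nsze]=0x100000
    done < <(/usr/local/src/nvme-cli/nvme id-ns "$dev")
}

Calling nvme_get_sketch ng2n2 /dev/ng2n2 would populate ng2n2[nsze], ng2n2[lbaf4], and so on, matching the values recorded above; note that `read -r reg val` keeps embedded colons (as in 'ms:0 lbads:9 rp:0') intact in val.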
00:09:36.962 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 (via /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3)
00:09:36.962 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n3 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:36.962 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n3: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:36.963 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:36.963 12:35:36 nvme_fdp -- nvme/functions.sh: ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:36.963 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:36.963 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
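The enclosing loop (functions.sh@54-58) walks each controller's sysfs directory once and picks up both namespace flavors, the ng2nN character devices and the nvme2nN block devices, with a single extglob pattern. A sketch under the same assumptions (extglob enabled, controller nvme2 as in this run):

declare -A _ctrl_ns
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# For ctrl=nvme2 the pattern expands to @("ng2"|"nvme2n")*, matching
# ng2n1, ng2n2, ... as well as nvme2n1, nvme2n2, ...
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                 # ng2n3, nvme2n1, ...
    # nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # as traced above
    _ctrl_ns[${ns_dev##*n}]=$ns_dev  # key is the namespace index
done

Because ngXnY and nvmeXnY share the index Y, the nvmeXnY entry registered later overwrites the ngXnY one in _ctrl_ns, which appears to be why this trace records ng2n2 and then nvme2n2 into the same slot.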
00:09:36.963 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1)
00:09:36.963 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n1 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:36.964 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n1: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:36.964 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:36.965 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
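Every namespace in this run reports the same geometry, so the lbaf table above decodes once: flbas=0x4 selects format 4, lbads is a power-of-two exponent (2^12 = 4096-byte blocks, ms:0 means no per-block metadata), and nsze=0x100000 such blocks is 4 GiB. A small illustrative check (values copied from the trace; this snippet is not part of functions.sh):

# Decode FLBAS/LBAF the way the fields above imply.
flbas=0x4 lbads=12 nsze=0x100000
fmt=$(( flbas & 0xf ))      # low nibble of FLBAS = active format index
bs=$(( 1 << lbads ))        # 4096-byte logical blocks
printf 'lbaf%d: %d-byte blocks, %d blocks = %d GiB\n' \
    "$fmt" "$bs" "$(( nsze ))" "$(( nsze * bs >> 30 ))"
# -> lbaf4: 4096-byte blocks, 1048576 blocks = 4 GiB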
00:09:36.965 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:36.965 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:36.965 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 (via /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2)
00:09:36.965 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n2 id-ns fields: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
00:09:36.965 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n2: rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh: nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)'
00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.966 12:35:36
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:36.966 12:35:36 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.966 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:36.967 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:36.967 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:36.968 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:36.968 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:37.230 12:35:36 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:37.230 12:35:36 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:37.230 12:35:36 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:37.230 12:35:36 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.230 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.230 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
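The xtrace entries above all come from one small loop in nvme/functions.sh: the output of nvme id-ctrl / id-ns is read line by line with IFS=:, each "register : value" line is split into reg and val, and non-empty values are assigned into a global associative array named after the device (nvme3, nvme2n2, and so on). A minimal sketch of that parse, assuming nvme-cli's default human-readable "field : value" output; the array and device names here are illustrative, not the script's own:

    #!/usr/bin/env bash
    # Sketch: collect `nvme id-ctrl` fields into an associative array.
    declare -A ctrl
    while IFS=: read -r reg val; do
        read -r reg <<<"$reg"                # trim the padding around the field name
        read -r val <<<"$val"                # trim the space after the colon
        [[ -n $reg && -n $val ]] || continue # same guard as functions.sh@22 above
        ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "ctratt=${ctrl[ctratt]}"

The real script assigns through eval instead of directly, because the target array name is itself computed at runtime (the $ref argument to nvme_get); with a fixed array name the plain assignment above suffices.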
00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 
12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.231 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.232 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
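Two of the fields just captured, nvme3[sqes]=0x66 and nvme3[cqes]=0x44, are packed values: per the Identify Controller layout, the low nibble is the required entry size and the high nibble the maximum, each as a power of two. A quick decode in bash arithmetic (values taken from the trace above):

    sqes=0x66; cqes=0x44
    echo $(( 1 << (sqes & 0xf) ))   # required SQ entry size -> 64 bytes
    echo $(( 1 << (sqes >> 4) ))    # maximum SQ entry size  -> 64 bytes
    echo $(( 1 << (cqes & 0xf) ))   # required CQ entry size -> 16 bytes

So this QEMU controller uses the standard 64-byte submission and 16-byte completion queue entries.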
00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:37.233 12:35:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:37.233 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:37.234 12:35:36 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:37.234 12:35:36 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:37.234 12:35:36 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:37.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:38.064 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.325 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.325 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.325 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:38.325 12:35:37 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.325 12:35:37 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.325 12:35:37 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.325 12:35:37 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.325 ************************************ 00:09:38.325 START TEST nvme_flexible_data_placement 00:09:38.325 ************************************ 00:09:38.325 12:35:37 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:38.586 Initializing NVMe Controllers 00:09:38.586 Attaching to 0000:00:13.0 00:09:38.586 Controller supports FDP Attached to 0000:00:13.0 00:09:38.586 Namespace ID: 1 Endurance Group ID: 1 00:09:38.586 Initialization complete. 
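The controller selection traced above turns on a single mask: get_ctrls_with_feature reads each controller's cached CTRATT identify field and tests bit 19, which the NVMe 2.0 specification assigns to Flexible Data Placement. Only nvme3 reports 0x88010 (bit 19 set); nvme0, nvme1, and nvme2 report 0x8000 and are skipped. A minimal standalone sketch of the same check, using the values from this run:

    # ctrl_has_fdp reduced to its core: CTRATT bit 19 == FDP support
    for ctratt in 0x8000 0x8000 0x88010 0x8000; do
        if (( ctratt & 1 << 19 )); then
            echo "FDP supported (ctratt=$ctratt)"   # fires only for 0x88010
        fi
    done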
00:09:38.586 00:09:38.586 ================================== 00:09:38.586 == FDP tests for Namespace: #01 == 00:09:38.586 ================================== 00:09:38.586 00:09:38.586 Get Feature: FDP: 00:09:38.586 ================= 00:09:38.586 Enabled: Yes 00:09:38.586 FDP configuration Index: 0 00:09:38.586 00:09:38.586 FDP configurations log page 00:09:38.586 =========================== 00:09:38.586 Number of FDP configurations: 1 00:09:38.586 Version: 0 00:09:38.586 Size: 112 00:09:38.586 FDP Configuration Descriptor: 0 00:09:38.586 Descriptor Size: 96 00:09:38.586 Reclaim Group Identifier format: 2 00:09:38.586 FDP Volatile Write Cache: Not Present 00:09:38.586 FDP Configuration: Valid 00:09:38.586 Vendor Specific Size: 0 00:09:38.586 Number of Reclaim Groups: 2 00:09:38.586 Number of Reclaim Unit Handles: 8 00:09:38.586 Max Placement Identifiers: 128 00:09:38.586 Number of Namespaces Supported: 256 00:09:38.586 Reclaim Unit Nominal Size: 6000000 bytes 00:09:38.586 Estimated Reclaim Unit Time Limit: Not Reported 00:09:38.586 RUH Desc #000: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #001: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #002: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #003: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #004: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #005: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #006: RUH Type: Initially Isolated 00:09:38.586 RUH Desc #007: RUH Type: Initially Isolated 00:09:38.586 00:09:38.586 FDP reclaim unit handle usage log page 00:09:38.586 ====================================== 00:09:38.586 Number of Reclaim Unit Handles: 8 00:09:38.586 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:38.586 RUH Usage Desc #001: RUH Attributes: Unused 00:09:38.586 RUH Usage Desc #002: RUH Attributes: Unused 00:09:38.586 RUH Usage Desc #003: RUH Attributes: Unused 00:09:38.586 RUH Usage Desc #004: RUH Attributes: Unused 00:09:38.586 RUH Usage Desc #005: RUH Attributes: Unused 00:09:38.586 RUH Usage Desc #006: RUH Attributes: Unused 00:09:38.586 RUH Usage Desc #007: RUH Attributes: Unused 00:09:38.586 00:09:38.586 FDP statistics log page 00:09:38.586 ======================= 00:09:38.586 Host bytes with metadata written: 938921984 00:09:38.586 Media bytes with metadata written: 939077632 00:09:38.586 Media bytes erased: 0 00:09:38.586 00:09:38.586 FDP Reclaim unit handle status 00:09:38.586 ============================== 00:09:38.586 Number of RUHS descriptors: 2 00:09:38.586 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004093 00:09:38.586 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:38.586 00:09:38.586 FDP write on placement id: 0 success 00:09:38.586 00:09:38.586 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:38.586 00:09:38.586 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:38.586 00:09:38.586 Get Feature: FDP Events for Placement handle: #0 00:09:38.586 ======================== 00:09:38.586 Number of FDP Events: 6 00:09:38.586 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:38.586 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:38.586 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:38.586 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:38.586 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:38.586 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:38.586 00:09:38.586 FDP events log page
00:09:38.586 =================== 00:09:38.586 Number of FDP events: 1 00:09:38.586 FDP Event #0: 00:09:38.586 Event Type: RU Not Written to Capacity 00:09:38.586 Placement Identifier: Valid 00:09:38.586 NSID: Valid 00:09:38.586 Location: Valid 00:09:38.586 Placement Identifier: 0 00:09:38.586 Event Timestamp: f 00:09:38.586 Namespace Identifier: 1 00:09:38.586 Reclaim Group Identifier: 0 00:09:38.586 Reclaim Unit Handle Identifier: 0 00:09:38.586 00:09:38.586 FDP test passed 00:09:38.586 00:09:38.586 real 0m0.244s 00:09:38.586 user 0m0.079s 00:09:38.586 sys 0m0.062s 00:09:38.586 12:35:38 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.586 ************************************ 00:09:38.586 END TEST nvme_flexible_data_placement 00:09:38.586 ************************************ 00:09:38.586 12:35:38 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:38.586 00:09:38.586 real 0m7.987s 00:09:38.586 user 0m1.147s 00:09:38.586 sys 0m1.589s 00:09:38.586 ************************************ 00:09:38.586 END TEST nvme_fdp 00:09:38.586 ************************************ 00:09:38.586 12:35:38 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.586 12:35:38 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:38.586 12:35:38 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:38.586 12:35:38 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.586 12:35:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:38.586 12:35:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.586 12:35:38 -- common/autotest_common.sh@10 -- # set +x 00:09:38.586 ************************************ 00:09:38.586 START TEST nvme_rpc 00:09:38.586 ************************************ 00:09:38.586 12:35:38 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:38.848 * Looking for test storage... 
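The FDP statistics page earlier in this run allows a quick sanity check: media bytes written barely exceed host bytes written, i.e. near-unity write amplification for this short workload. A back-of-the-envelope check with the two counters from the log:

    # media-vs-host write ratio from the FDP statistics log page above
    echo "scale=4; 939077632 / 938921984" | bc    # prints 1.0001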
00:09:38.848 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.848 12:35:38 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.848 --rc genhtml_branch_coverage=1 00:09:38.848 --rc genhtml_function_coverage=1 00:09:38.848 --rc genhtml_legend=1 00:09:38.848 --rc geninfo_all_blocks=1 00:09:38.848 --rc geninfo_unexecuted_blocks=1 00:09:38.848 00:09:38.848 ' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.848 --rc genhtml_branch_coverage=1 00:09:38.848 --rc genhtml_function_coverage=1 00:09:38.848 --rc genhtml_legend=1 00:09:38.848 --rc geninfo_all_blocks=1 00:09:38.848 --rc geninfo_unexecuted_blocks=1 00:09:38.848 00:09:38.848 ' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.848 --rc genhtml_branch_coverage=1 00:09:38.848 --rc genhtml_function_coverage=1 00:09:38.848 --rc genhtml_legend=1 00:09:38.848 --rc geninfo_all_blocks=1 00:09:38.848 --rc geninfo_unexecuted_blocks=1 00:09:38.848 00:09:38.848 ' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:38.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.848 --rc genhtml_branch_coverage=1 00:09:38.848 --rc genhtml_function_coverage=1 00:09:38.848 --rc genhtml_legend=1 00:09:38.848 --rc geninfo_all_blocks=1 00:09:38.848 --rc geninfo_unexecuted_blocks=1 00:09:38.848 00:09:38.848 ' 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:38.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67468 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:38.848 12:35:38 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67468 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67468 ']' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.848 12:35:38 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.109 [2024-12-14 12:35:38.615848] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
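Before any RPC work, get_first_nvme_bdf resolves a target controller: gen_nvme.sh emits a bdev configuration as JSON, jq extracts each traddr, and the first of the four addresses (0000:00:10.0 here) becomes $bdf. Condensed from the trace above, with the repository path as used on this runner:

    # how the test picks its target controller
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"    # 0000:00:10.0 on this runner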
00:09:39.109 [2024-12-14 12:35:38.615984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67468 ] 00:09:39.109 [2024-12-14 12:35:38.781722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:39.370 [2024-12-14 12:35:38.929819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.370 [2024-12-14 12:35:38.929934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.313 12:35:39 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.313 12:35:39 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.313 12:35:39 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:40.313 Nvme0n1 00:09:40.313 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:40.313 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:40.574 request: 00:09:40.574 { 00:09:40.574 "bdev_name": "Nvme0n1", 00:09:40.574 "filename": "non_existing_file", 00:09:40.574 "method": "bdev_nvme_apply_firmware", 00:09:40.574 "req_id": 1 00:09:40.574 } 00:09:40.574 Got JSON-RPC error response 00:09:40.574 response: 00:09:40.574 { 00:09:40.574 "code": -32603, 00:09:40.574 "message": "open file failed." 00:09:40.574 } 00:09:40.574 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:40.574 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:40.574 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:40.835 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:40.835 12:35:40 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67468 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67468 ']' 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67468 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67468 00:09:40.835 killing process with pid 67468 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67468' 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67468 00:09:40.835 12:35:40 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67468 00:09:42.738 ************************************ 00:09:42.738 END TEST nvme_rpc 00:09:42.738 ************************************ 00:09:42.738 00:09:42.738 real 0m3.621s 00:09:42.738 user 0m6.688s 00:09:42.738 sys 0m0.721s 00:09:42.738 12:35:41 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.738 12:35:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.738 12:35:41 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.738 12:35:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:42.738 12:35:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.738 12:35:41 -- common/autotest_common.sh@10 -- # set +x 00:09:42.738 ************************************ 00:09:42.738 START TEST nvme_rpc_timeouts 00:09:42.738 ************************************ 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:42.738 * Looking for test storage... 00:09:42.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:42.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.738 12:35:42 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:42.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.738 --rc genhtml_branch_coverage=1 00:09:42.738 --rc genhtml_function_coverage=1 00:09:42.738 --rc genhtml_legend=1 00:09:42.738 --rc geninfo_all_blocks=1 00:09:42.738 --rc geninfo_unexecuted_blocks=1 00:09:42.738 00:09:42.738 ' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:42.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.738 --rc genhtml_branch_coverage=1 00:09:42.738 --rc genhtml_function_coverage=1 00:09:42.738 --rc genhtml_legend=1 00:09:42.738 --rc geninfo_all_blocks=1 00:09:42.738 --rc geninfo_unexecuted_blocks=1 00:09:42.738 00:09:42.738 ' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:42.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.738 --rc genhtml_branch_coverage=1 00:09:42.738 --rc genhtml_function_coverage=1 00:09:42.738 --rc genhtml_legend=1 00:09:42.738 --rc geninfo_all_blocks=1 00:09:42.738 --rc geninfo_unexecuted_blocks=1 00:09:42.738 00:09:42.738 ' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:42.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.738 --rc genhtml_branch_coverage=1 00:09:42.738 --rc genhtml_function_coverage=1 00:09:42.738 --rc genhtml_legend=1 00:09:42.738 --rc geninfo_all_blocks=1 00:09:42.738 --rc geninfo_unexecuted_blocks=1 00:09:42.738 00:09:42.738 ' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67533 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67533 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67565 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67565 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67565 ']' 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.738 12:35:42 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:42.738 12:35:42 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:42.738 [2024-12-14 12:35:42.261257] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:42.738 [2024-12-14 12:35:42.261404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67565 ] 00:09:42.738 [2024-12-14 12:35:42.423369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.000 [2024-12-14 12:35:42.548493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.000 [2024-12-14 12:35:42.548561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.573 Checking default timeout settings: 00:09:43.573 12:35:43 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.573 12:35:43 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.573 12:35:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.573 12:35:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.145 12:35:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:44.145 Making settings changes with rpc: 00:09:44.145 12:35:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:44.145 12:35:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:44.145 Check default vs. modified settings: 00:09:44.145 12:35:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67533 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67533 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.717 Setting action_on_timeout is changed as expected. 
00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67533 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67533 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.717 Setting timeout_us is changed as expected. 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67533 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67533 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.717 Setting timeout_admin_us is changed as expected. 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.717 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.718 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
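Each of the three checks above follows the same pattern: pull one field out of the default and modified save_config dumps, strip everything but alphanumerics, and confirm the value moved to what bdev_nvme_set_options requested (12000000, 24000000, abort). The timeout_us case, condensed, with the tmpfile names from this run:

    # compare one setting across the two saved configs
    before=$(grep timeout_us /tmp/settings_default_67533 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep timeout_us /tmp/settings_modified_67533 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [[ $before != "$after" ]]; then
        echo "Setting timeout_us is changed as expected."
    fi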
00:09:44.718 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.718 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67533 /tmp/settings_modified_67533 00:09:44.718 12:35:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67565 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67565 ']' 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67565 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67565 00:09:44.718 killing process with pid 67565 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67565' 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67565 00:09:44.718 12:35:44 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67565 00:09:46.093 RPC TIMEOUT SETTING TEST PASSED. 00:09:46.093 12:35:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:46.093 ************************************ 00:09:46.093 END TEST nvme_rpc_timeouts 00:09:46.093 ************************************ 00:09:46.093 00:09:46.093 real 0m3.642s 00:09:46.093 user 0m6.958s 00:09:46.093 sys 0m0.673s 00:09:46.093 12:35:45 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.093 12:35:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 12:35:45 -- spdk/autotest.sh@239 -- # uname -s 00:09:46.093 12:35:45 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:46.093 12:35:45 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.093 12:35:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.093 12:35:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.093 12:35:45 -- common/autotest_common.sh@10 -- # set +x 00:09:46.093 ************************************ 00:09:46.093 START TEST sw_hotplug 00:09:46.093 ************************************ 00:09:46.093 12:35:45 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.093 * Looking for test storage... 
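The sw_hotplug suite starting here first enumerates candidate controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02 (NVMe). The trace that follows walks scripts/common.sh doing, in essence, this one pipeline (reassembled from the xtrace below; note the -v argument deliberately embeds quotes so cc matches lspci's quoted class field):

    # list NVMe controllers by class code 0108, progif 02
    lspci -mm -n -D | grep -i -- -p02 | \
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # prints 0000:00:10.0 through 0000:00:13.0 on this runner; the harness
    # then keeps only the first nvme_count=2 of them for hotplug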
00:09:46.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.093 12:35:45 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:46.093 12:35:45 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:46.093 12:35:45 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:46.354 12:35:45 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.354 12:35:45 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:46.354 12:35:45 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.354 12:35:45 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:46.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.354 --rc genhtml_branch_coverage=1 00:09:46.354 --rc genhtml_function_coverage=1 00:09:46.354 --rc genhtml_legend=1 00:09:46.354 --rc geninfo_all_blocks=1 00:09:46.354 --rc geninfo_unexecuted_blocks=1 00:09:46.354 00:09:46.354 ' 00:09:46.354 12:35:45 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:46.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.354 --rc genhtml_branch_coverage=1 00:09:46.354 --rc genhtml_function_coverage=1 00:09:46.354 --rc genhtml_legend=1 00:09:46.354 --rc geninfo_all_blocks=1 00:09:46.354 --rc geninfo_unexecuted_blocks=1 00:09:46.354 00:09:46.354 ' 00:09:46.354 12:35:45 
sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:46.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.354 --rc genhtml_branch_coverage=1 00:09:46.354 --rc genhtml_function_coverage=1 00:09:46.354 --rc genhtml_legend=1 00:09:46.354 --rc geninfo_all_blocks=1 00:09:46.354 --rc geninfo_unexecuted_blocks=1 00:09:46.354 00:09:46.354 ' 00:09:46.354 12:35:45 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:46.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.354 --rc genhtml_branch_coverage=1 00:09:46.354 --rc genhtml_function_coverage=1 00:09:46.354 --rc genhtml_legend=1 00:09:46.354 --rc geninfo_all_blocks=1 00:09:46.354 --rc geninfo_unexecuted_blocks=1 00:09:46.354 00:09:46.354 ' 00:09:46.354 12:35:45 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.615 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.615 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.615 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.615 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.615 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:46.615 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:46.615 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:46.615 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:46.615 12:35:46 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:46.615 12:35:46 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:46.615 12:35:46 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:46.615 12:35:46 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:46.615 12:35:46 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:46.615 12:35:46 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.876 
12:35:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:46.876 12:35:46 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.876 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:46.876 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:46.876 12:35:46 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.394 Waiting for block devices as requested 00:09:47.394 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.394 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.394 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.394 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.685 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.686 12:35:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:52.686 12:35:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.947 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:52.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.947 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:53.520 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:53.520 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.520 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:53.781 12:35:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68428 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:53.781 12:35:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:53.781 12:35:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:53.781 12:35:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:53.781 12:35:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:53.781 12:35:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:53.781 12:35:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:54.042 Initializing NVMe Controllers 00:09:54.042 Attaching to 0000:00:10.0 00:09:54.042 Attaching to 0000:00:11.0 00:09:54.042 Attached to 0000:00:11.0 00:09:54.042 Attached to 0000:00:10.0 00:09:54.042 Initialization complete. Starting I/O... 00:09:54.042 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:54.042 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:54.042 00:09:55.044 QEMU NVMe Ctrl (12341 ): 2484 I/Os completed (+2484) 00:09:55.045 QEMU NVMe Ctrl (12340 ): 2552 I/Os completed (+2552) 00:09:55.045 00:09:55.987 QEMU NVMe Ctrl (12341 ): 5571 I/Os completed (+3087) 00:09:55.987 QEMU NVMe Ctrl (12340 ): 5653 I/Os completed (+3101) 00:09:55.987 00:09:56.931 QEMU NVMe Ctrl (12341 ): 8279 I/Os completed (+2708) 00:09:56.931 QEMU NVMe Ctrl (12340 ): 8357 I/Os completed (+2704) 00:09:56.931 00:09:58.309 QEMU NVMe Ctrl (12341 ): 11414 I/Os completed (+3135) 00:09:58.309 QEMU NVMe Ctrl (12340 ): 11444 I/Os completed (+3087) 00:09:58.309 00:09:58.881 QEMU NVMe Ctrl (12341 ): 14861 I/Os completed (+3447) 00:09:58.881 QEMU NVMe Ctrl (12340 ): 14871 I/Os completed (+3427) 00:09:58.881 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.824 [2024-12-14 12:35:59.410666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.824 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.824 [2024-12-14 12:35:59.414429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.414560] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.414607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.414658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.824 [2024-12-14 12:35:59.418911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.418958] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.418972] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.418988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.824 [2024-12-14 12:35:59.432559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.824 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.824 [2024-12-14 12:35:59.433632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.433668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.433686] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.433700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.824 [2024-12-14 12:35:59.435390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.435421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.435435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 [2024-12-14 12:35:59.435448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.824 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.824 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.824 EAL: Scan for (pci) bus failed. 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:00.086 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:00.086 Attaching to 0000:00:10.0 00:10:00.086 Attached to 0000:00:10.0 00:10:00.086 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.087 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.087 12:35:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.087 Attaching to 0000:00:11.0 00:10:00.087 Attached to 0000:00:11.0 00:10:01.029 QEMU NVMe Ctrl (12340 ): 2747 I/Os completed (+2747) 00:10:01.029 QEMU NVMe Ctrl (12341 ): 2479 I/Os completed (+2479) 00:10:01.029 00:10:01.970 QEMU NVMe Ctrl (12340 ): 5459 I/Os completed (+2712) 00:10:01.970 QEMU NVMe Ctrl (12341 ): 5193 I/Os completed (+2714) 00:10:01.970 00:10:02.910 QEMU NVMe Ctrl (12340 ): 8287 I/Os completed (+2828) 00:10:02.910 QEMU NVMe Ctrl (12341 ): 8024 I/Os completed (+2831) 00:10:02.910 00:10:04.293 QEMU NVMe Ctrl (12340 ): 10823 I/Os completed (+2536) 00:10:04.293 QEMU NVMe Ctrl (12341 ): 10564 I/Os completed (+2540) 00:10:04.293 00:10:05.236 QEMU NVMe Ctrl (12340 ): 13525 I/Os completed (+2702) 00:10:05.236 QEMU NVMe Ctrl (12341 ): 13274 I/Os completed (+2710) 00:10:05.236 00:10:06.178 QEMU NVMe Ctrl (12340 ): 17234 I/Os completed (+3709) 00:10:06.178 QEMU NVMe Ctrl (12341 ): 16965 I/Os completed (+3691) 00:10:06.178 00:10:07.121 QEMU NVMe Ctrl (12340 ): 21212 I/Os completed (+3978) 00:10:07.121 QEMU NVMe Ctrl (12341 ): 20938 I/Os completed (+3973) 
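Once both controllers are gone, sw_hotplug.sh@56 rescans the PCI bus and @58-62 rebind each device to uio_pci_generic. The "EAL: cannot open sysfs value .../vendor" and "Scan for (pci) bus failed" lines are expected here: the hotplug example rescans while 0000:00:11.0 is still absent from sysfs. A plausible reconstruction of the traced echoes, with the sysfs paths again assumed rather than shown by xtrace:

    echo 1 > /sys/bus/pci/rescan                                        # sw_hotplug.sh@56
    echo uio_pci_generic > /sys/bus/pci/devices/$dev/driver_override    # @59
    echo "$dev" > /sys/bus/pci/drivers_probe                            # @60-61
    echo '' > /sys/bus/pci/devices/$dev/driver_override                 # @62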
00:10:07.121 00:10:08.065 QEMU NVMe Ctrl (12340 ): 25020 I/Os completed (+3808) 00:10:08.065 QEMU NVMe Ctrl (12341 ): 24742 I/Os completed (+3804) 00:10:08.065 00:10:09.007 QEMU NVMe Ctrl (12340 ): 28096 I/Os completed (+3076) 00:10:09.007 QEMU NVMe Ctrl (12341 ): 27825 I/Os completed (+3083) 00:10:09.007 00:10:09.950 QEMU NVMe Ctrl (12340 ): 30864 I/Os completed (+2768) 00:10:09.950 QEMU NVMe Ctrl (12341 ): 30596 I/Os completed (+2771) 00:10:09.950 00:10:10.892 QEMU NVMe Ctrl (12340 ): 34832 I/Os completed (+3968) 00:10:10.892 QEMU NVMe Ctrl (12341 ): 34551 I/Os completed (+3955) 00:10:10.892 00:10:12.279 QEMU NVMe Ctrl (12340 ): 38674 I/Os completed (+3842) 00:10:12.279 QEMU NVMe Ctrl (12341 ): 38387 I/Os completed (+3836) 00:10:12.279 00:10:12.279 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.279 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.279 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.279 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.279 [2024-12-14 12:36:11.756871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:12.279 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:12.279 [2024-12-14 12:36:11.757836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 [2024-12-14 12:36:11.757874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 [2024-12-14 12:36:11.757890] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 [2024-12-14 12:36:11.757905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:12.279 [2024-12-14 12:36:11.759480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 [2024-12-14 12:36:11.759521] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 [2024-12-14 12:36:11.759533] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 [2024-12-14 12:36:11.759544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.279 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:12.279 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:12.279 [2024-12-14 12:36:11.779040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
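The removal under way here is the second of the three events requested by remove_attach_helper 3 6 false; (( hotplug_events-- )) at sw_hotplug.sh@38 counts them down. A simplified sketch of one iteration, reconstructed from the trace (sysfs paths assumed as above):

    while ((hotplug_events--)); do
        for dev in "${nvmes[@]}"; do
            echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @39-40: surprise-remove both
        done
        echo 1 > /sys/bus/pci/rescan                      # @56: bring them back
        # @58-62: rebind each device to uio_pci_generic
        sleep 12                                          # @66: let the example reattach and resume I/O
    done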
00:10:12.279 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:12.279 [2024-12-14 12:36:11.780050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 [2024-12-14 12:36:11.780090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 [2024-12-14 12:36:11.780107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 [2024-12-14 12:36:11.780119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:12.280 [2024-12-14 12:36:11.781458] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 [2024-12-14 12:36:11.781489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 [2024-12-14 12:36:11.781501] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 [2024-12-14 12:36:11.781512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:12.280 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:12.280 EAL: Scan for (pci) bus failed. 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:12.280 12:36:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:12.280 Attaching to 0000:00:10.0 00:10:12.280 Attached to 0000:00:10.0 00:10:12.280 12:36:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:12.541 12:36:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:12.541 12:36:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:12.541 Attaching to 0000:00:11.0 00:10:12.541 Attached to 0000:00:11.0 00:10:13.114 QEMU NVMe Ctrl (12340 ): 1868 I/Os completed (+1868) 00:10:13.114 QEMU NVMe Ctrl (12341 ): 1621 I/Os completed (+1621) 00:10:13.114 00:10:14.057 QEMU NVMe Ctrl (12340 ): 5370 I/Os completed (+3502) 00:10:14.058 QEMU NVMe Ctrl (12341 ): 5121 I/Os completed (+3500) 00:10:14.058 00:10:15.000 QEMU NVMe Ctrl (12340 ): 9134 I/Os completed (+3764) 00:10:15.000 QEMU NVMe Ctrl (12341 ): 8875 I/Os completed (+3754) 00:10:15.000 00:10:15.941 QEMU NVMe Ctrl (12340 ): 12931 I/Os completed (+3797) 00:10:15.941 QEMU NVMe Ctrl (12341 ): 12664 I/Os completed (+3789) 00:10:15.941 00:10:16.884 QEMU NVMe Ctrl (12340 ): 15823 I/Os completed (+2892) 00:10:16.884 QEMU NVMe Ctrl (12341 ): 15566 I/Os completed (+2902) 00:10:16.884 00:10:18.296 QEMU NVMe Ctrl (12340 ): 18791 I/Os completed (+2968) 00:10:18.296 QEMU NVMe Ctrl (12341 ): 18526 I/Os completed (+2960) 00:10:18.296 00:10:19.234 QEMU NVMe Ctrl (12340 ): 22732 I/Os completed (+3941) 00:10:19.234 QEMU NVMe Ctrl (12341 ): 22477 I/Os completed (+3951) 00:10:19.234 
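The per-second "+N I/Os completed" deltas printed by the hotplug example show I/O resuming against both controllers after every reattach. To eyeball the aggregate from a saved copy of this console output (the filename is illustrative):

    grep -oE '\(\+[0-9]+\)' sw_hotplug.log | tr -dc '0-9\n' | paste -sd+ - | bc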
00:10:20.169 QEMU NVMe Ctrl (12340 ): 26525 I/Os completed (+3793) 00:10:20.169 QEMU NVMe Ctrl (12341 ): 26261 I/Os completed (+3784) 00:10:20.169 00:10:21.104 QEMU NVMe Ctrl (12340 ): 30200 I/Os completed (+3675) 00:10:21.104 QEMU NVMe Ctrl (12341 ): 29988 I/Os completed (+3727) 00:10:21.104 00:10:22.040 QEMU NVMe Ctrl (12340 ): 33968 I/Os completed (+3768) 00:10:22.040 QEMU NVMe Ctrl (12341 ): 33736 I/Os completed (+3748) 00:10:22.040 00:10:22.984 QEMU NVMe Ctrl (12340 ): 37874 I/Os completed (+3906) 00:10:22.984 QEMU NVMe Ctrl (12341 ): 37632 I/Os completed (+3896) 00:10:22.984 00:10:23.926 QEMU NVMe Ctrl (12340 ): 41686 I/Os completed (+3812) 00:10:23.926 QEMU NVMe Ctrl (12341 ): 41430 I/Os completed (+3798) 00:10:23.926 00:10:24.496 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:24.496 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.496 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.496 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.496 [2024-12-14 12:36:24.031774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.496 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.496 [2024-12-14 12:36:24.032815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.496 [2024-12-14 12:36:24.032929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.496 [2024-12-14 12:36:24.032960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.496 [2024-12-14 12:36:24.033016] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.496 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.496 [2024-12-14 12:36:24.034552] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.496 [2024-12-14 12:36:24.034643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.034704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.034729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.497 [2024-12-14 12:36:24.053882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:24.497 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.497 [2024-12-14 12:36:24.054812] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.054870] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.054897] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.054921] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.497 [2024-12-14 12:36:24.056365] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.056414] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.056441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 [2024-12-14 12:36:24.056463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.497 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.497 Attaching to 0000:00:10.0 00:10:24.497 Attached to 0000:00:10.0 00:10:24.758 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.758 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.758 12:36:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.758 Attaching to 0000:00:11.0 00:10:24.758 Attached to 0000:00:11.0 00:10:24.758 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.758 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.758 [2024-12-14 12:36:24.306446] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:36.992 12:36:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.992 12:36:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.992 12:36:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.90 00:10:36.992 12:36:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.90 00:10:36.992 12:36:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:36.992 12:36:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.90 00:10:36.992 12:36:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.90 2 00:10:36.992 remove_attach_helper took 42.90s to complete (handling 2 nvme drive(s)) 12:36:36 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68428 00:10:43.578 
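The 42.90 figure comes from bash's built-in timing: timing_cmd sets TIMEFORMAT=%2R (visible at autotest_common.sh@713 earlier in the trace) so that `time` reports only elapsed seconds, to two decimals. The kill -0 68428 just traced (its result opens the next lines) probes whether the hotplug example is still alive before wait reaps it. A minimal sketch of the pattern:

    TIMEFORMAT=%2R
    time remove_attach_helper 3 6 false    # prints e.g. 42.90 on stderr
    if ! kill -0 "$hotplug_pid" 2>/dev/null; then
        wait "$hotplug_pid"                # reap it; kill's "No such process" is expected here
    fi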
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68428) - No such process 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68428 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68977 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68977 00:10:43.578 12:36:42 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68977 ']' 00:10:43.578 12:36:42 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.578 12:36:42 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.578 12:36:42 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.578 12:36:42 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.578 12:36:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.578 12:36:42 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.578 [2024-12-14 12:36:42.402964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:43.578 [2024-12-14 12:36:42.403139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68977 ] 00:10:43.578 [2024-12-14 12:36:42.570538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.578 [2024-12-14 12:36:42.689149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.838 12:36:43 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.838 12:36:43 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:43.838 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:43.838 12:36:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.838 12:36:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.839 12:36:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:43.839 12:36:43 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:43.839 12:36:43 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:43.839 12:36:43 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:43.839 12:36:43 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:43.839 12:36:43 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:43.839 12:36:43 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:43.839 12:36:43 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.428 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.428 12:36:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.428 12:36:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.429 12:36:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.429 [2024-12-14 12:36:49.492088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:50.429 [2024-12-14 12:36:49.493273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.429 [2024-12-14 12:36:49.493390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.429 [2024-12-14 12:36:49.493407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.429 [2024-12-14 12:36:49.493425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.429 [2024-12-14 12:36:49.493433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.429 [2024-12-14 12:36:49.493441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.429 [2024-12-14 12:36:49.493448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.429 [2024-12-14 12:36:49.493456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.429 [2024-12-14 12:36:49.493463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.429 [2024-12-14 12:36:49.493473] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.429 [2024-12-14 12:36:49.493480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.429 [2024-12-14 12:36:49.493487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.429 12:36:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.429 12:36:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.429 12:36:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.429 12:36:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.429 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:50.429 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:50.689 [2024-12-14 12:36:50.192080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:50.689 [2024-12-14 12:36:50.193239] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.689 [2024-12-14 12:36:50.193268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.689 [2024-12-14 12:36:50.193280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.690 [2024-12-14 12:36:50.193293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.690 [2024-12-14 12:36:50.193301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.690 [2024-12-14 12:36:50.193308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.690 [2024-12-14 12:36:50.193317] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.690 [2024-12-14 12:36:50.193323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.690 [2024-12-14 12:36:50.193331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.690 [2024-12-14 12:36:50.193338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:50.690 [2024-12-14 12:36:50.193345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:50.690 [2024-12-14 12:36:50.193352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:50.950 12:36:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.950 12:36:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:50.950 12:36:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:50.950 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:51.211 12:36:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.443 12:37:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.443 12:37:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.443 12:37:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.443 12:37:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.443 12:37:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.443 12:37:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.443 12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:03.443 
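With use_bdev=true, removal is now confirmed through the running SPDK target instead of sysfs: the bdev_bdfs helper lists which NVMe bdevs the target still exposes and extracts their PCI addresses, and sw_hotplug.sh@50 polls it every 0.5s until the list drains. rpc_cmd is the harness wrapper around scripts/rpc.py, so the equivalent standalone query and the loop shape would be roughly:

    scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u

    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done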
12:37:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.443 [2024-12-14 12:37:02.892255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:03.443 [2024-12-14 12:37:02.893429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.443 [2024-12-14 12:37:02.893462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.443 [2024-12-14 12:37:02.893472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.443 [2024-12-14 12:37:02.893489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.443 [2024-12-14 12:37:02.893496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.444 [2024-12-14 12:37:02.893504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.444 [2024-12-14 12:37:02.893511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.444 [2024-12-14 12:37:02.893519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.444 [2024-12-14 12:37:02.893525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.444 [2024-12-14 12:37:02.893534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.444 [2024-12-14 12:37:02.893541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.444 [2024-12-14 12:37:02.893548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:03.704 12:37:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.704 12:37:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:03.704 12:37:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:03.704 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:03.965 [2024-12-14 12:37:03.592494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
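The qpair records surrounding each removal decode as follows: "ASYNC EVENT REQUEST (0c)" is admin opcode 0x0C, the Asynchronous Event Request commands a controller keeps outstanding, and "ABORTED - BY REQUEST (00/07)" is generic command status 0x07, Command Abort Requested. Seeing these aborted on every surprise removal is the normal teardown path, not a failure of the test.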
00:11:03.965 [2024-12-14 12:37:03.593645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.965 [2024-12-14 12:37:03.593677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.965 [2024-12-14 12:37:03.593689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.965 [2024-12-14 12:37:03.593703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.965 [2024-12-14 12:37:03.593712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.965 [2024-12-14 12:37:03.593718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.965 [2024-12-14 12:37:03.593727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.965 [2024-12-14 12:37:03.593737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.965 [2024-12-14 12:37:03.593744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:03.965 [2024-12-14 12:37:03.593751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:03.965 [2024-12-14 12:37:03.593759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:03.965 [2024-12-14 12:37:03.593765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.226 12:37:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.226 12:37:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.226 12:37:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:04.226 12:37:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:04.487 12:37:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.721 12:37:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.721 12:37:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.721 12:37:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:16.721 12:37:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.721 12:37:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:16.721 12:37:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:16.721 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:16.721 [2024-12-14 12:37:16.292681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
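After the 12s settle window, sw_hotplug.sh@70-71 re-read the bdev list and asserted that both test devices re-registered; the heavily escaped pattern in the trace is just xtrace's rendering of a literal [[ ... == ... ]] comparison. An equivalent check, reconstructed:

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # fails the event if either device is missing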
00:11:16.721 [2024-12-14 12:37:16.293848] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.721 [2024-12-14 12:37:16.293882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.721 [2024-12-14 12:37:16.293893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.721 [2024-12-14 12:37:16.293911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.721 [2024-12-14 12:37:16.293918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.721 [2024-12-14 12:37:16.293927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.721 [2024-12-14 12:37:16.293934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.721 [2024-12-14 12:37:16.293942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.721 [2024-12-14 12:37:16.293949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:16.721 [2024-12-14 12:37:16.293957] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:16.721 [2024-12-14 12:37:16.293964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:16.721 [2024-12-14 12:37:16.293971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.293 [2024-12-14 12:37:16.792683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:17.293 [2024-12-14 12:37:16.793804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.293 [2024-12-14 12:37:16.793918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.293 [2024-12-14 12:37:16.793933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.293 [2024-12-14 12:37:16.793946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.293 [2024-12-14 12:37:16.793955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.293 [2024-12-14 12:37:16.793961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.293 [2024-12-14 12:37:16.793970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.293 [2024-12-14 12:37:16.793976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.293 [2024-12-14 12:37:16.793985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.293 [2024-12-14 12:37:16.793992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:17.293 [2024-12-14 12:37:16.794000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:17.293 [2024-12-14 12:37:16.794006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:17.293 12:37:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.293 12:37:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:17.293 12:37:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:17.293 12:37:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:17.293 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.293 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:17.293 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:17.293 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:17.553 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:17.553 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:17.553 12:37:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:29.787 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:29.787 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:29.787 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:29.787 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:29.787 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:11:29.788 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:29.788 12:37:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:29.788 12:37:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:29.788 12:37:29 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.376 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.377 12:37:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.377 12:37:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.377 12:37:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.377 [2024-12-14 12:37:35.253940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:36.377 [2024-12-14 12:37:35.254905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.255003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.255069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 [2024-12-14 12:37:35.255106] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.255124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.255199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 [2024-12-14 12:37:35.255226] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.255246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.255301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 [2024-12-14 12:37:35.255332] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.255348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.255402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.377 12:37:35 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.377 12:37:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.377 12:37:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.377 [2024-12-14 12:37:35.753934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:36.377 [2024-12-14 12:37:35.754889] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.754990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.755051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 [2024-12-14 12:37:35.755121] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.755142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.755190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 [2024-12-14 12:37:35.755218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.755234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.755285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 [2024-12-14 12:37:35.755311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.377 [2024-12-14 12:37:35.755327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.377 [2024-12-14 12:37:35.755374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.377 12:37:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:36.377 12:37:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.638 12:37:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.638 12:37:36 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:36.638 12:37:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.638 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.900 12:37:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.135 12:37:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.135 12:37:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.135 12:37:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.135 12:37:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.135 12:37:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.135 [2024-12-14 12:37:48.654175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
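Before this pass the harness disabled and re-enabled the target's hotplug poller (sw_hotplug.sh@119-120 above), then launched another timed run with debug_remove_attach_helper 3 6 true. Issued standalone against the default /var/tmp/spdk.sock socket, those RPCs would be:

    scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the hotplug poller
    scripts/rpc.py bdev_nvme_set_hotplug -e   # re-enable it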
00:11:49.135 [2024-12-14 12:37:48.655138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.135 [2024-12-14 12:37:48.655228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.135 [2024-12-14 12:37:48.655588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.135 [2024-12-14 12:37:48.655671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.135 [2024-12-14 12:37:48.655710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.135 [2024-12-14 12:37:48.655738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.135 [2024-12-14 12:37:48.655764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.135 [2024-12-14 12:37:48.655800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.135 [2024-12-14 12:37:48.655825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.135 [2024-12-14 12:37:48.655851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.135 [2024-12-14 12:37:48.655867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.135 [2024-12-14 12:37:48.655893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.135 12:37:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:49.135 12:37:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.708 12:37:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.708 12:37:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.708 12:37:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:49.708 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:49.708 [2024-12-14 12:37:49.254171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:49.708 [2024-12-14 12:37:49.255101] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.708 [2024-12-14 12:37:49.255125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.708 [2024-12-14 12:37:49.255135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.708 [2024-12-14 12:37:49.255147] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.708 [2024-12-14 12:37:49.255157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.708 [2024-12-14 12:37:49.255164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.708 [2024-12-14 12:37:49.255173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.708 [2024-12-14 12:37:49.255179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.708 [2024-12-14 12:37:49.255187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:49.708 [2024-12-14 12:37:49.255194] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:49.708 [2024-12-14 12:37:49.255201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:49.708 [2024-12-14 12:37:49.255207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:50.280 12:37:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.280 12:37:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.280 12:37:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:50.280 12:37:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:02.540 12:38:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:02.540 12:38:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:02.540 12:38:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:02.540 12:38:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.540 12:38:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.540 12:38:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.540 12:38:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.540 12:38:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.540 12:38:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:02.540 [2024-12-14 12:38:02.054387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:02.540 [2024-12-14 12:38:02.055356] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.540 [2024-12-14 12:38:02.055379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.540 [2024-12-14 12:38:02.055389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.540 [2024-12-14 12:38:02.055406] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.540 [2024-12-14 12:38:02.055413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.540 [2024-12-14 12:38:02.055421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.540 [2024-12-14 12:38:02.055429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.540 [2024-12-14 12:38:02.055439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.540 [2024-12-14 12:38:02.055445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.540 [2024-12-14 12:38:02.055454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:02.540 [2024-12-14 12:38:02.055460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:02.540 [2024-12-14 12:38:02.055468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:02.540 12:38:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.540 12:38:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.540 12:38:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:02.540 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:03.111 [2024-12-14 12:38:02.554389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:03.111 [2024-12-14 12:38:02.555326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.111 [2024-12-14 12:38:02.555421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.111 [2024-12-14 12:38:02.555484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.111 [2024-12-14 12:38:02.555512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.111 [2024-12-14 12:38:02.555531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.111 [2024-12-14 12:38:02.555582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.111 [2024-12-14 12:38:02.555610] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.111 [2024-12-14 12:38:02.555626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.111 [2024-12-14 12:38:02.555676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.111 [2024-12-14 12:38:02.555703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.111 [2024-12-14 12:38:02.555722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.111 [2024-12-14 12:38:02.555777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.111 12:38:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.111 12:38:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.111 12:38:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:03.111 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:03.371 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:03.371 12:38:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:12:15.608 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:15.608 12:38:14 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68977 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68977 ']' 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68977 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68977 00:12:15.608 killing process with pid 68977 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68977' 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68977 00:12:15.608 12:38:14 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68977 00:12:16.552 12:38:16 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:16.812 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:17.384 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:17.384 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:17.384 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.384 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:17.384 ************************************ 00:12:17.384 END TEST sw_hotplug 00:12:17.384 ************************************ 00:12:17.384 
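The sw_hotplug trace above repeats one cycle per hotplug event: detach both controllers (the bare `echo 1` at sw_hotplug.sh@40 — xtrace does not record redirection targets, so the sysfs path it writes to is not visible in the log), poll until no SPDK bdev still reports a PCI address, rebind the drivers (the `echo uio_pci_generic` / `echo <bdf>` sequence at @56-62, targets likewise elided), sleep 12s, and verify at @71 that both BDFs reappear. The two helpers doing the polling can be reconstructed from the traced commands at @12-13 and @50-51; the sketch below is inferred from the xtrace, not the verbatim script (the `/dev/fd/63` argument shown for jq indicates process substitution, and `rpc_cmd` is SPDK's autotest RPC wrapper):

    # Sketch reconstructed from the xtrace; the real nvme/sw_hotplug.sh
    # may differ in detail.

    # sw_hotplug.sh@12-13: list the PCI addresses (BDFs) still backing
    # SPDK nvme bdevs; jq reads rpc output via process substitution.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # sw_hotplug.sh@50-51: after detaching the devices, poll until no
    # bdev reports a PCI address, i.e. the controllers are really gone.
    # Matches the traced order per iteration: bdfs=(...), (( N > 0 )),
    # sleep 0.5, then the "Still waiting" printf.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done

This reading is consistent with the counts in the log: `(( 2 > 0 ))` while both 0000:00:10.0 and 0000:00:11.0 still expose bdevs, `(( 1 > 0 ))` once only 0000:00:11.0 remains, and loop exit on `(( 0 > 0 ))` before the rebind sequence starts.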
00:12:17.384 real 2m31.307s 00:12:17.384 user 1m52.698s 00:12:17.384 sys 0m17.215s 00:12:17.384 12:38:17 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.384 12:38:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.384 12:38:17 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:17.384 12:38:17 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:17.384 12:38:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:17.384 12:38:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.384 12:38:17 -- common/autotest_common.sh@10 -- # set +x 00:12:17.384 ************************************ 00:12:17.384 START TEST nvme_xnvme 00:12:17.384 ************************************ 00:12:17.384 12:38:17 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:17.648 * Looking for test storage... 00:12:17.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.648 12:38:17 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.648 --rc genhtml_branch_coverage=1 00:12:17.648 --rc genhtml_function_coverage=1 00:12:17.648 --rc genhtml_legend=1 00:12:17.648 --rc geninfo_all_blocks=1 00:12:17.648 --rc geninfo_unexecuted_blocks=1 00:12:17.648 00:12:17.648 ' 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.648 --rc genhtml_branch_coverage=1 00:12:17.648 --rc genhtml_function_coverage=1 00:12:17.648 --rc genhtml_legend=1 00:12:17.648 --rc geninfo_all_blocks=1 00:12:17.648 --rc geninfo_unexecuted_blocks=1 00:12:17.648 00:12:17.648 ' 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.648 --rc genhtml_branch_coverage=1 00:12:17.648 --rc genhtml_function_coverage=1 00:12:17.648 --rc genhtml_legend=1 00:12:17.648 --rc geninfo_all_blocks=1 00:12:17.648 --rc geninfo_unexecuted_blocks=1 00:12:17.648 00:12:17.648 ' 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.648 --rc genhtml_branch_coverage=1 00:12:17.648 --rc genhtml_function_coverage=1 00:12:17.648 --rc genhtml_legend=1 00:12:17.648 --rc geninfo_all_blocks=1 00:12:17.648 --rc geninfo_unexecuted_blocks=1 00:12:17.648 00:12:17.648 ' 00:12:17.648 12:38:17 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:17.648 12:38:17 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:17.648 12:38:17 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:17.648 12:38:17 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:17.648 12:38:17 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:17.649 12:38:17 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:17.649 12:38:17 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:17.649 12:38:17 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:17.649 #define SPDK_CONFIG_H 00:12:17.649 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:17.649 #define SPDK_CONFIG_APPS 1 00:12:17.649 #define SPDK_CONFIG_ARCH native 00:12:17.649 #define SPDK_CONFIG_ASAN 1 00:12:17.649 #undef SPDK_CONFIG_AVAHI 00:12:17.649 #undef SPDK_CONFIG_CET 00:12:17.649 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:17.649 #define SPDK_CONFIG_COVERAGE 1 00:12:17.649 #define SPDK_CONFIG_CROSS_PREFIX 00:12:17.649 #undef SPDK_CONFIG_CRYPTO 00:12:17.649 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:17.649 #undef SPDK_CONFIG_CUSTOMOCF 00:12:17.649 #undef SPDK_CONFIG_DAOS 00:12:17.649 #define SPDK_CONFIG_DAOS_DIR 00:12:17.649 #define SPDK_CONFIG_DEBUG 1 00:12:17.649 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:17.649 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:17.649 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:17.649 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:17.649 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:17.649 #undef SPDK_CONFIG_DPDK_UADK 00:12:17.649 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:17.649 #define SPDK_CONFIG_EXAMPLES 1 00:12:17.649 #undef SPDK_CONFIG_FC 00:12:17.649 #define SPDK_CONFIG_FC_PATH 00:12:17.649 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:17.649 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:17.649 #define SPDK_CONFIG_FSDEV 1 00:12:17.649 #undef SPDK_CONFIG_FUSE 00:12:17.649 #undef SPDK_CONFIG_FUZZER 00:12:17.649 #define SPDK_CONFIG_FUZZER_LIB 00:12:17.649 #undef SPDK_CONFIG_GOLANG 00:12:17.649 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:17.649 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:17.649 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:17.649 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:17.649 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:17.649 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:17.649 #undef SPDK_CONFIG_HAVE_LZ4 00:12:17.649 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:17.649 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:17.649 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:17.649 #define SPDK_CONFIG_IDXD 1 00:12:17.649 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:17.649 #undef SPDK_CONFIG_IPSEC_MB 00:12:17.649 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:17.649 #define SPDK_CONFIG_ISAL 1 00:12:17.649 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:17.649 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:17.649 #define SPDK_CONFIG_LIBDIR 00:12:17.649 #undef SPDK_CONFIG_LTO 00:12:17.649 #define SPDK_CONFIG_MAX_LCORES 128 00:12:17.649 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:17.649 #define SPDK_CONFIG_NVME_CUSE 1 00:12:17.649 #undef SPDK_CONFIG_OCF 00:12:17.649 #define SPDK_CONFIG_OCF_PATH 00:12:17.649 #define SPDK_CONFIG_OPENSSL_PATH 00:12:17.649 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:17.649 #define SPDK_CONFIG_PGO_DIR 00:12:17.649 #undef SPDK_CONFIG_PGO_USE 00:12:17.649 #define SPDK_CONFIG_PREFIX /usr/local 00:12:17.649 #undef SPDK_CONFIG_RAID5F 00:12:17.649 #undef SPDK_CONFIG_RBD 00:12:17.649 #define SPDK_CONFIG_RDMA 1 00:12:17.649 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:17.649 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:17.649 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:17.649 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:17.649 #define SPDK_CONFIG_SHARED 1 00:12:17.649 #undef SPDK_CONFIG_SMA 00:12:17.649 #define SPDK_CONFIG_TESTS 1 00:12:17.649 #undef SPDK_CONFIG_TSAN 00:12:17.649 #define SPDK_CONFIG_UBLK 1 00:12:17.649 #define SPDK_CONFIG_UBSAN 1 00:12:17.649 #undef SPDK_CONFIG_UNIT_TESTS 00:12:17.649 #undef SPDK_CONFIG_URING 00:12:17.649 #define SPDK_CONFIG_URING_PATH 00:12:17.649 #undef SPDK_CONFIG_URING_ZNS 00:12:17.649 #undef SPDK_CONFIG_USDT 00:12:17.649 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:17.649 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:17.649 #undef SPDK_CONFIG_VFIO_USER 00:12:17.649 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:17.649 #define SPDK_CONFIG_VHOST 1 00:12:17.649 #define SPDK_CONFIG_VIRTIO 1 00:12:17.649 #undef SPDK_CONFIG_VTUNE 00:12:17.649 #define SPDK_CONFIG_VTUNE_DIR 00:12:17.649 #define SPDK_CONFIG_WERROR 1 00:12:17.649 #define SPDK_CONFIG_WPDK_DIR 00:12:17.649 #define SPDK_CONFIG_XNVME 1 00:12:17.649 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:17.649 12:38:17 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:17.649 12:38:17 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.649 12:38:17 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.649 12:38:17 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.649 12:38:17 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.649 12:38:17 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.649 12:38:17 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.649 12:38:17 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.649 12:38:17 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.649 12:38:17 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.649 12:38:17 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.649 12:38:17 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:17.649 12:38:17 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:17.650 
12:38:17 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:17.650 12:38:17 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:17.650 12:38:17 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.650 12:38:17 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.651 12:38:17 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70360 ]] 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70360 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.uUXOZa 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.uUXOZa/tests/xnvme /tmp/spdk.uUXOZa 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:17.651 12:38:17 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971677184 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596569600 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971677184 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596569600 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.651 12:38:17 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:17.651 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98871087104 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=831692800 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:17.652 * Looking for test storage... 
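The mount table dumped above is the raw material for set_test_storage: one df -T pass fills parallel associative arrays keyed by mount point, and the candidate directories are then checked against the requested size. A simplified sketch of that scan, reusing the variable names from the trace (the *1024 block-to-byte conversion is an assumption; df -T reports 1K blocks while the traced values are bytes):

# Simplified sketch of the set_test_storage scan traced above; not the
# verbatim helper. Index df -T output by mount point, then pick the first
# storage candidate whose backing mount has enough free space.
declare -A mounts fss sizes avails uses
requested_size=2214592512   # 2 GiB + 64 MiB slack, as in the trace

while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))    # df -T prints 1K blocks (assumed conversion)
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

# storage_candidates is built earlier in the trace: the testdir itself, a
# /tmp/spdk.XXXXXX fallback tree, and the fallback root.
for target_dir in "${storage_candidates[@]}"; do
    mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    if (( target_space != 0 && target_space >= requested_size )); then
        printf '* Found test storage at %s\n' "$target_dir"
        export SPDK_TEST_STORAGE=$target_dir
        break
    fi
done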
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971677184
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:12:17.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1703 -- # true
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ?
0 : 0 - 1]' 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.652 12:38:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.913 12:38:17 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.913 12:38:17 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:17.913 12:38:17 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.913 12:38:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.913 --rc genhtml_branch_coverage=1 00:12:17.914 --rc genhtml_function_coverage=1 00:12:17.914 --rc genhtml_legend=1 00:12:17.914 --rc geninfo_all_blocks=1 00:12:17.914 --rc geninfo_unexecuted_blocks=1 00:12:17.914 00:12:17.914 ' 00:12:17.914 12:38:17 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.914 --rc genhtml_branch_coverage=1 00:12:17.914 --rc genhtml_function_coverage=1 00:12:17.914 --rc genhtml_legend=1 00:12:17.914 --rc geninfo_all_blocks=1 
00:12:17.914 --rc geninfo_unexecuted_blocks=1 00:12:17.914 00:12:17.914 ' 00:12:17.914 12:38:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.914 --rc genhtml_branch_coverage=1 00:12:17.914 --rc genhtml_function_coverage=1 00:12:17.914 --rc genhtml_legend=1 00:12:17.914 --rc geninfo_all_blocks=1 00:12:17.914 --rc geninfo_unexecuted_blocks=1 00:12:17.914 00:12:17.914 ' 00:12:17.914 12:38:17 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.914 --rc genhtml_branch_coverage=1 00:12:17.914 --rc genhtml_function_coverage=1 00:12:17.914 --rc genhtml_legend=1 00:12:17.914 --rc geninfo_all_blocks=1 00:12:17.914 --rc geninfo_unexecuted_blocks=1 00:12:17.914 00:12:17.914 ' 00:12:17.914 12:38:17 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:17.914 12:38:17 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.914 12:38:17 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.914 12:38:17 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.914 12:38:17 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.914 12:38:17 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.914 12:38:17 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.914 12:38:17 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.914 12:38:17 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:17.914 12:38:17 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.914 12:38:17 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false')
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme
00:12:17.914 12:38:17 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:12:18.175 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:18.175 Waiting for block devices as requested
00:12:18.436 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:18.436 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:18.436 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:18.696 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:23.991 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:12:23.991 12:38:23 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme
00:12:23.991 12:38:23 nvme_xnvme -- xnvme/common.sh@74 -- # nproc
00:12:23.991 12:38:23 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10
00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme
00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*)
00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1
00:12:24.253 12:38:23 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:12:24.253 12:38:23 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:12:24.253 No valid GPT data, bailing
00:12:24.253 12:38:23 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:12:24.253 12:38:23 nvme_xnvme --
scripts/common.sh@394 -- # pt= 00:12:24.253 12:38:23 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:24.253 12:38:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:24.253 12:38:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:24.253 12:38:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.253 12:38:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:24.253 ************************************ 00:12:24.253 START TEST xnvme_rpc 00:12:24.253 ************************************ 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:24.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70745 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70745 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70745 ']' 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:24.253 12:38:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:24.514 [2024-12-14 12:38:24.053243] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:24.514 [2024-12-14 12:38:24.053374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70745 ] 00:12:24.514 [2024-12-14 12:38:24.218210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.775 [2024-12-14 12:38:24.336002] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.348 xnvme_bdev 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.348 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70745 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70745 ']' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70745 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70745 00:12:25.608 killing process with pid 70745 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70745' 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70745 00:12:25.608 12:38:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70745 00:12:27.522 ************************************ 00:12:27.522 END TEST xnvme_rpc 00:12:27.522 ************************************ 00:12:27.522 00:12:27.522 real 0m2.890s 00:12:27.522 user 0m2.864s 00:12:27.522 sys 0m0.469s 00:12:27.522 12:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.522 12:38:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.522 12:38:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:27.522 12:38:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:27.522 12:38:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.522 12:38:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:27.522 ************************************ 00:12:27.522 START TEST xnvme_bdevperf 00:12:27.522 ************************************ 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
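Each readback assertion above funnels through a small helper that dumps the bdev subsystem configuration from the live target and plucks one parameter of the bdev_xnvme_create call with jq. A hedged sketch of that helper (the trace's rpc_cmd wrapper is replaced here by a direct rpc.py call, and $rootdir is assumed to point at the SPDK checkout):

# Sketch of the rpc_xnvme readback exercised by the assertions above; a
# hypothetical reconstruction, not the verbatim xnvme/common.sh.
rpc_xnvme() {
    local key=$1
    "$rootdir/scripts/rpc.py" framework_get_config bdev \
        | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.${key}"
}

# The trace asserts each parameter round-trips through the running target:
[[ $(rpc_xnvme name) == xnvme_bdev ]]
[[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
[[ $(rpc_xnvme io_mechanism) == libaio ]]
[[ $(rpc_xnvme conserve_cpu) == false ]]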
00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:27.522 12:38:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:27.522 { 00:12:27.522 "subsystems": [ 00:12:27.522 { 00:12:27.522 "subsystem": "bdev", 00:12:27.522 "config": [ 00:12:27.523 { 00:12:27.523 "params": { 00:12:27.523 "io_mechanism": "libaio", 00:12:27.523 "conserve_cpu": false, 00:12:27.523 "filename": "/dev/nvme0n1", 00:12:27.523 "name": "xnvme_bdev" 00:12:27.523 }, 00:12:27.523 "method": "bdev_xnvme_create" 00:12:27.523 }, 00:12:27.523 { 00:12:27.523 "method": "bdev_wait_for_examine" 00:12:27.523 } 00:12:27.523 ] 00:12:27.523 } 00:12:27.523 ] 00:12:27.523 } 00:12:27.523 [2024-12-14 12:38:26.995134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:27.523 [2024-12-14 12:38:26.995431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70819 ] 00:12:27.523 [2024-12-14 12:38:27.157341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.783 [2024-12-14 12:38:27.274041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.045 Running I/O for 5 seconds... 00:12:29.998 25667.00 IOPS, 100.26 MiB/s [2024-12-14T12:38:30.679Z] 25514.50 IOPS, 99.67 MiB/s [2024-12-14T12:38:31.621Z] 25973.33 IOPS, 101.46 MiB/s [2024-12-14T12:38:33.008Z] 25603.50 IOPS, 100.01 MiB/s [2024-12-14T12:38:33.008Z] 25628.20 IOPS, 100.11 MiB/s 00:12:33.271 Latency(us) 00:12:33.271 [2024-12-14T12:38:33.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:33.271 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:33.271 xnvme_bdev : 5.01 25602.36 100.01 0.00 0.00 2494.47 538.78 7108.14 00:12:33.271 [2024-12-14T12:38:33.008Z] =================================================================================================================== 00:12:33.271 [2024-12-14T12:38:33.008Z] Total : 25602.36 100.01 0.00 0.00 2494.47 538.78 7108.14 00:12:33.842 12:38:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:33.842 12:38:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:33.842 12:38:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:33.842 12:38:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:33.842 12:38:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:33.842 { 00:12:33.842 "subsystems": [ 00:12:33.842 { 00:12:33.842 "subsystem": "bdev", 00:12:33.842 "config": [ 00:12:33.842 { 00:12:33.842 "params": { 00:12:33.842 "io_mechanism": "libaio", 00:12:33.842 "conserve_cpu": false, 00:12:33.842 "filename": "/dev/nvme0n1", 00:12:33.842 "name": "xnvme_bdev" 00:12:33.842 }, 00:12:33.842 "method": "bdev_xnvme_create" 00:12:33.842 }, 00:12:33.842 { 00:12:33.842 "method": "bdev_wait_for_examine" 00:12:33.842 } 00:12:33.842 ] 00:12:33.842 } 00:12:33.842 ] 00:12:33.842 } 00:12:33.842 [2024-12-14 12:38:33.450882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
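The JSON fragments interleaved above are gen_conf output: bdevperf never sees a config file on disk, it reads the generated subsystem config through --json /dev/fd/62. A sketch of the same wiring using process substitution (the exact fd number depends on what the shell allocates; SPDK_EXAMPLE_DIR is exported earlier in the trace, and the config values are copied from it):

# Sketch of feeding bdevperf a generated JSON config without a temp file, as
# the harness above does; a condensation, not the verbatim gen_conf helper.
"$SPDK_EXAMPLE_DIR/bdevperf" --json <(cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "libaio",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096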
00:12:33.842 [2024-12-14 12:38:33.451632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70900 ] 00:12:34.103 [2024-12-14 12:38:33.616706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.103 [2024-12-14 12:38:33.732925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.365 Running I/O for 5 seconds... 00:12:36.692 31273.00 IOPS, 122.16 MiB/s [2024-12-14T12:38:37.369Z] 32564.50 IOPS, 127.21 MiB/s [2024-12-14T12:38:38.313Z] 32230.33 IOPS, 125.90 MiB/s [2024-12-14T12:38:39.255Z] 32479.25 IOPS, 126.87 MiB/s [2024-12-14T12:38:39.255Z] 32813.20 IOPS, 128.18 MiB/s 00:12:39.518 Latency(us) 00:12:39.518 [2024-12-14T12:38:39.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.518 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:39.518 xnvme_bdev : 5.00 32797.81 128.12 0.00 0.00 1946.94 460.01 7007.31 00:12:39.518 [2024-12-14T12:38:39.255Z] =================================================================================================================== 00:12:39.518 [2024-12-14T12:38:39.255Z] Total : 32797.81 128.12 0.00 0.00 1946.94 460.01 7007.31 00:12:40.462 00:12:40.462 real 0m12.913s 00:12:40.462 user 0m4.712s 00:12:40.462 sys 0m6.649s 00:12:40.462 12:38:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.462 ************************************ 00:12:40.462 12:38:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:40.462 END TEST xnvme_bdevperf 00:12:40.462 ************************************ 00:12:40.462 12:38:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:40.462 12:38:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:40.462 12:38:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.462 12:38:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:40.462 ************************************ 00:12:40.462 START TEST xnvme_fio_plugin 00:12:40.462 ************************************ 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:40.462 
12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:40.462 12:38:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:40.462 { 00:12:40.462 "subsystems": [ 00:12:40.462 { 00:12:40.462 "subsystem": "bdev", 00:12:40.462 "config": [ 00:12:40.462 { 00:12:40.462 "params": { 00:12:40.462 "io_mechanism": "libaio", 00:12:40.462 "conserve_cpu": false, 00:12:40.462 "filename": "/dev/nvme0n1", 00:12:40.462 "name": "xnvme_bdev" 00:12:40.462 }, 00:12:40.462 "method": "bdev_xnvme_create" 00:12:40.462 }, 00:12:40.462 { 00:12:40.462 "method": "bdev_wait_for_examine" 00:12:40.462 } 00:12:40.462 ] 00:12:40.462 } 00:12:40.462 ] 00:12:40.462 } 00:12:40.462 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:40.462 fio-3.35 00:12:40.462 Starting 1 thread 00:12:47.052 00:12:47.052 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71014: Sat Dec 14 12:38:45 2024 00:12:47.052 read: IOPS=31.5k, BW=123MiB/s (129MB/s)(615MiB/5001msec) 00:12:47.052 slat (usec): min=4, max=1844, avg=23.77, stdev=99.09 00:12:47.052 clat (usec): min=22, max=5032, avg=1385.66, stdev=541.07 00:12:47.052 lat (usec): min=102, max=5045, avg=1409.43, stdev=530.62 00:12:47.052 clat percentiles (usec): 00:12:47.052 | 1.00th=[ 277], 5.00th=[ 529], 10.00th=[ 709], 20.00th=[ 938], 00:12:47.052 | 30.00th=[ 1106], 40.00th=[ 1254], 50.00th=[ 1369], 60.00th=[ 1500], 00:12:47.052 | 70.00th=[ 1631], 80.00th=[ 1778], 90.00th=[ 2040], 95.00th=[ 2278], 00:12:47.052 | 99.00th=[ 2933], 99.50th=[ 3294], 99.90th=[ 4047], 99.95th=[ 4228], 00:12:47.052 | 99.99th=[ 4555] 00:12:47.052 bw ( KiB/s): min=113688, 
max=136528, per=99.12%, avg=124852.44, stdev=6468.69, samples=9 00:12:47.052 iops : min=28422, max=34132, avg=31213.11, stdev=1617.17, samples=9 00:12:47.052 lat (usec) : 50=0.01%, 250=0.66%, 500=3.73%, 750=7.17%, 1000=11.72% 00:12:47.052 lat (msec) : 2=65.60%, 4=11.00%, 10=0.12% 00:12:47.052 cpu : usr=37.36%, sys=53.78%, ctx=18, majf=0, minf=764 00:12:47.052 IO depths : 1=0.4%, 2=1.1%, 4=2.8%, 8=8.1%, 16=23.0%, 32=62.5%, >=64=2.1% 00:12:47.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:47.052 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:12:47.052 issued rwts: total=157483,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:47.052 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:47.052 00:12:47.052 Run status group 0 (all jobs): 00:12:47.052 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=615MiB (645MB), run=5001-5001msec 00:12:47.313 ----------------------------------------------------- 00:12:47.313 Suppressions used: 00:12:47.313 count bytes template 00:12:47.313 1 11 /usr/src/fio/parse.c 00:12:47.313 1 8 libtcmalloc_minimal.so 00:12:47.313 1 904 libcrypto.so 00:12:47.313 ----------------------------------------------------- 00:12:47.313 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:47.313 
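Because the fio plugin is built with ASan, the sanitizer runtime must be loaded before the plugin itself; the ldd/grep/awk sequence above discovers which libasan the plugin links against. The pattern, condensed to the gcc case (the trace also probes libclang_rt.asan for clang builds; paths and fio flags are taken from the trace):

# Sketch of the LD_PRELOAD discovery traced above: resolve the ASan runtime
# the fio plugin was linked against, preload it ahead of the plugin, run fio.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
[[ -n $asan_lib ]] && export LD_PRELOAD="$asan_lib $plugin"
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev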
12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:47.313 12:38:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:47.313 { 00:12:47.313 "subsystems": [ 00:12:47.313 { 00:12:47.313 "subsystem": "bdev", 00:12:47.313 "config": [ 00:12:47.313 { 00:12:47.313 "params": { 00:12:47.313 "io_mechanism": "libaio", 00:12:47.313 "conserve_cpu": false, 00:12:47.313 "filename": "/dev/nvme0n1", 00:12:47.313 "name": "xnvme_bdev" 00:12:47.313 }, 00:12:47.313 "method": "bdev_xnvme_create" 00:12:47.313 }, 00:12:47.313 { 00:12:47.313 "method": "bdev_wait_for_examine" 00:12:47.313 } 00:12:47.313 ] 00:12:47.313 } 00:12:47.313 ] 00:12:47.313 } 00:12:47.574 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:47.574 fio-3.35 00:12:47.574 Starting 1 thread 00:12:54.163 00:12:54.163 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71111: Sat Dec 14 12:38:52 2024 00:12:54.163 write: IOPS=32.0k, BW=125MiB/s (131MB/s)(626MiB/5002msec); 0 zone resets 00:12:54.163 slat (usec): min=4, max=3337, avg=25.79, stdev=87.28 00:12:54.163 clat (usec): min=104, max=8234, avg=1284.62, stdev=581.29 00:12:54.163 lat (usec): min=187, max=8239, avg=1310.41, stdev=574.31 00:12:54.163 clat percentiles (usec): 00:12:54.163 | 1.00th=[ 255], 5.00th=[ 424], 10.00th=[ 578], 20.00th=[ 791], 00:12:54.163 | 30.00th=[ 955], 40.00th=[ 1090], 50.00th=[ 1237], 60.00th=[ 1401], 00:12:54.163 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 1991], 95.00th=[ 2245], 00:12:54.163 | 99.00th=[ 2999], 99.50th=[ 3326], 99.90th=[ 4047], 99.95th=[ 4359], 00:12:54.163 | 99.99th=[ 6718] 00:12:54.163 bw ( KiB/s): min=118440, max=135936, per=99.77%, avg=127872.00, stdev=5283.50, samples=9 00:12:54.163 iops : min=29610, max=33984, avg=31968.00, stdev=1320.87, samples=9 00:12:54.163 lat (usec) : 250=0.94%, 500=6.42%, 750=10.60%, 1000=15.41% 00:12:54.163 lat (msec) : 2=56.92%, 4=9.61%, 10=0.11% 00:12:54.163 cpu : usr=30.29%, sys=57.41%, ctx=13, majf=0, minf=765 00:12:54.163 IO depths : 1=0.3%, 2=0.8%, 4=2.8%, 8=8.6%, 16=24.3%, 32=61.3%, >=64=2.0% 00:12:54.163 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:54.163 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:54.163 issued rwts: total=0,160266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:54.163 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:54.163 00:12:54.163 Run status group 0 (all jobs): 00:12:54.163 WRITE: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=626MiB (656MB), run=5002-5002msec 00:12:54.163 ----------------------------------------------------- 00:12:54.163 Suppressions used: 00:12:54.163 count bytes template 00:12:54.163 1 11 /usr/src/fio/parse.c 00:12:54.163 1 8 libtcmalloc_minimal.so 00:12:54.163 1 904 libcrypto.so 00:12:54.163 ----------------------------------------------------- 00:12:54.163 00:12:54.163 ************************************ 00:12:54.163 END TEST xnvme_fio_plugin 00:12:54.164 
************************************ 00:12:54.164 00:12:54.164 real 0m13.949s 00:12:54.164 user 0m6.240s 00:12:54.164 sys 0m6.246s 00:12:54.164 12:38:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.164 12:38:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:54.425 12:38:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:54.425 12:38:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:54.425 12:38:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:54.425 12:38:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:54.425 12:38:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:54.425 12:38:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.425 12:38:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:54.425 ************************************ 00:12:54.425 START TEST xnvme_rpc 00:12:54.425 ************************************ 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71194 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71194 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71194 ']' 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.425 12:38:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:54.425 [2024-12-14 12:38:54.017959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
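From here the harness repeats the whole xnvme_rpc body with conserve_cpu flipped to true; the cc map set up above turns the boolean into the -c flag of bdev_xnvme_create. A sketch of the toggle (rpc.py stands in for the trace's rpc_cmd wrapper, $rootdir is assumed, and rpc_xnvme is as sketched earlier):

# Sketch of the conserve_cpu toggle driving this second pass; a hypothetical
# condensation of xnvme.sh, not the verbatim test.
declare -A cc=([false]='' [true]='-c')
conserve_cpu=true
# ${cc[...]} is deliberately unquoted so the empty "false" case adds no arg.
"$rootdir/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc[$conserve_cpu]}
[[ $(rpc_xnvme conserve_cpu) == true ]]   # readback must now report true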
00:12:54.425 [2024-12-14 12:38:54.018161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71194 ] 00:12:54.686 [2024-12-14 12:38:54.194210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.686 [2024-12-14 12:38:54.313291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.627 xnvme_bdev 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:55.627 12:38:55 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71194 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71194 ']' 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71194 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71194 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71194' 00:12:55.627 killing process with pid 71194 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71194 00:12:55.627 12:38:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71194 00:12:57.539 00:12:57.539 real 0m2.906s 00:12:57.539 user 0m2.867s 00:12:57.539 sys 0m0.501s 00:12:57.539 12:38:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.539 ************************************ 00:12:57.539 END TEST xnvme_rpc 00:12:57.539 ************************************ 00:12:57.539 12:38:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.539 12:38:56 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:57.539 12:38:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.539 12:38:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.539 12:38:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.539 ************************************ 00:12:57.539 START TEST xnvme_bdevperf 00:12:57.539 ************************************ 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:57.539 12:38:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:57.540 { 00:12:57.540 "subsystems": [ 00:12:57.540 { 00:12:57.540 "subsystem": "bdev", 00:12:57.540 "config": [ 00:12:57.540 { 00:12:57.540 "params": { 00:12:57.540 "io_mechanism": "libaio", 00:12:57.540 "conserve_cpu": true, 00:12:57.540 "filename": "/dev/nvme0n1", 00:12:57.540 "name": "xnvme_bdev" 00:12:57.540 }, 00:12:57.540 "method": "bdev_xnvme_create" 00:12:57.540 }, 00:12:57.540 { 00:12:57.540 "method": "bdev_wait_for_examine" 00:12:57.540 } 00:12:57.540 ] 00:12:57.540 } 00:12:57.540 ] 00:12:57.540 } 00:12:57.540 [2024-12-14 12:38:56.977723] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:57.540 [2024-12-14 12:38:56.977880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71262 ] 00:12:57.540 [2024-12-14 12:38:57.145991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.540 [2024-12-14 12:38:57.262446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.112 Running I/O for 5 seconds... 00:13:00.053 33568.00 IOPS, 131.12 MiB/s [2024-12-14T12:39:00.733Z] 31810.00 IOPS, 124.26 MiB/s [2024-12-14T12:39:01.676Z] 31092.33 IOPS, 121.45 MiB/s [2024-12-14T12:39:02.620Z] 31014.75 IOPS, 121.15 MiB/s [2024-12-14T12:39:02.620Z] 31264.80 IOPS, 122.13 MiB/s 00:13:02.883 Latency(us) 00:13:02.883 [2024-12-14T12:39:02.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:02.884 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:02.884 xnvme_bdev : 5.01 31223.43 121.97 0.00 0.00 2045.10 450.56 7309.78 00:13:02.884 [2024-12-14T12:39:02.621Z] =================================================================================================================== 00:13:02.884 [2024-12-14T12:39:02.621Z] Total : 31223.43 121.97 0.00 0.00 2045.10 450.56 7309.78 00:13:03.828 12:39:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:03.828 12:39:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:03.828 12:39:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:03.828 12:39:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:03.828 12:39:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:03.828 { 00:13:03.828 "subsystems": [ 00:13:03.829 { 00:13:03.829 "subsystem": "bdev", 00:13:03.829 "config": [ 00:13:03.829 { 00:13:03.829 "params": { 00:13:03.829 "io_mechanism": "libaio", 00:13:03.829 "conserve_cpu": true, 00:13:03.829 "filename": "/dev/nvme0n1", 00:13:03.829 "name": "xnvme_bdev" 00:13:03.829 }, 00:13:03.829 "method": "bdev_xnvme_create" 00:13:03.829 }, 00:13:03.829 { 00:13:03.829 "method": "bdev_wait_for_examine" 00:13:03.829 } 00:13:03.829 ] 00:13:03.829 } 00:13:03.829 ] 00:13:03.829 } 00:13:03.829 [2024-12-14 12:39:03.453764] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
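Both bdevperf invocations in this leg consume their bdev definition from a file descriptor (/dev/fd/62) rather than a config file on disk; gen_conf prints the JSON and the shell wires it in. A sketch of the same wiring using process substitution instead of the harness's fd plumbing, reusing the JSON and flags verbatim from the trace above:

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'

    # 4 KiB random writes at queue depth 64 for 5 s against the bdev above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(printf '%s' "$conf") -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096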
00:13:03.829 [2024-12-14 12:39:03.453903] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71343 ] 00:13:04.090 [2024-12-14 12:39:03.619939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.090 [2024-12-14 12:39:03.736916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.351 Running I/O for 5 seconds... 00:13:06.314 35761.00 IOPS, 139.69 MiB/s [2024-12-14T12:39:07.438Z] 36648.00 IOPS, 143.16 MiB/s [2024-12-14T12:39:08.380Z] 36334.33 IOPS, 141.93 MiB/s [2024-12-14T12:39:09.326Z] 35868.25 IOPS, 140.11 MiB/s [2024-12-14T12:39:09.326Z] 35387.40 IOPS, 138.23 MiB/s 00:13:09.589 Latency(us) 00:13:09.589 [2024-12-14T12:39:09.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.589 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:09.589 xnvme_bdev : 5.01 35356.65 138.11 0.00 0.00 1805.40 450.56 7763.50 00:13:09.589 [2024-12-14T12:39:09.326Z] =================================================================================================================== 00:13:09.589 [2024-12-14T12:39:09.326Z] Total : 35356.65 138.11 0.00 0.00 1805.40 450.56 7763.50 00:13:10.160 00:13:10.160 real 0m12.963s 00:13:10.160 user 0m5.144s 00:13:10.160 sys 0m6.217s 00:13:10.160 12:39:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.160 ************************************ 00:13:10.160 END TEST xnvme_bdevperf 00:13:10.160 ************************************ 00:13:10.160 12:39:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.420 12:39:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:10.420 12:39:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.420 12:39:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.420 12:39:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.420 ************************************ 00:13:10.420 START TEST xnvme_fio_plugin 00:13:10.420 ************************************ 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:10.420 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.421 12:39:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.421 { 00:13:10.421 "subsystems": [ 00:13:10.421 { 00:13:10.421 "subsystem": "bdev", 00:13:10.421 "config": [ 00:13:10.421 { 00:13:10.421 "params": { 00:13:10.421 "io_mechanism": "libaio", 00:13:10.421 "conserve_cpu": true, 00:13:10.421 "filename": "/dev/nvme0n1", 00:13:10.421 "name": "xnvme_bdev" 00:13:10.421 }, 00:13:10.421 "method": "bdev_xnvme_create" 00:13:10.421 }, 00:13:10.421 { 00:13:10.421 "method": "bdev_wait_for_examine" 00:13:10.421 } 00:13:10.421 ] 00:13:10.421 } 00:13:10.421 ] 00:13:10.421 } 00:13:10.421 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:10.421 fio-3.35 00:13:10.421 Starting 1 thread 00:13:17.002 00:13:17.002 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71462: Sat Dec 14 12:39:15 2024 00:13:17.002 read: IOPS=31.8k, BW=124MiB/s (130MB/s)(622MiB/5001msec) 00:13:17.002 slat (usec): min=4, max=2136, avg=23.88, stdev=103.10 00:13:17.002 clat (usec): min=107, max=4862, avg=1376.79, stdev=534.60 00:13:17.002 lat (usec): min=206, max=4947, avg=1400.67, stdev=523.71 00:13:17.002 clat percentiles (usec): 00:13:17.002 | 1.00th=[ 285], 5.00th=[ 578], 10.00th=[ 734], 20.00th=[ 930], 00:13:17.002 | 30.00th=[ 1090], 40.00th=[ 1221], 50.00th=[ 1352], 60.00th=[ 1483], 00:13:17.002 | 70.00th=[ 1614], 80.00th=[ 1778], 90.00th=[ 2024], 95.00th=[ 2278], 00:13:17.002 | 99.00th=[ 2933], 99.50th=[ 3294], 99.90th=[ 3916], 99.95th=[ 4080], 00:13:17.002 | 99.99th=[ 4424] 00:13:17.002 bw ( KiB/s): 
min=122520, max=131448, per=100.00%, avg=127728.00, stdev=2866.33, samples=9 00:13:17.002 iops : min=30630, max=32862, avg=31932.00, stdev=716.58, samples=9 00:13:17.002 lat (usec) : 250=0.60%, 500=3.06%, 750=7.07%, 1000=13.58% 00:13:17.002 lat (msec) : 2=64.94%, 4=10.67%, 10=0.08% 00:13:17.002 cpu : usr=37.22%, sys=54.24%, ctx=14, majf=0, minf=764 00:13:17.002 IO depths : 1=0.4%, 2=1.0%, 4=2.7%, 8=7.7%, 16=22.5%, 32=63.6%, >=64=2.2% 00:13:17.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.002 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:17.002 issued rwts: total=159162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.002 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.002 00:13:17.002 Run status group 0 (all jobs): 00:13:17.002 READ: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=622MiB (652MB), run=5001-5001msec 00:13:17.263 ----------------------------------------------------- 00:13:17.263 Suppressions used: 00:13:17.263 count bytes template 00:13:17.263 1 11 /usr/src/fio/parse.c 00:13:17.263 1 8 libtcmalloc_minimal.so 00:13:17.263 1 904 libcrypto.so 00:13:17.263 ----------------------------------------------------- 00:13:17.263 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:17.263 
12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:17.263 12:39:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.263 { 00:13:17.263 "subsystems": [ 00:13:17.263 { 00:13:17.263 "subsystem": "bdev", 00:13:17.263 "config": [ 00:13:17.263 { 00:13:17.263 "params": { 00:13:17.263 "io_mechanism": "libaio", 00:13:17.263 "conserve_cpu": true, 00:13:17.263 "filename": "/dev/nvme0n1", 00:13:17.263 "name": "xnvme_bdev" 00:13:17.263 }, 00:13:17.263 "method": "bdev_xnvme_create" 00:13:17.263 }, 00:13:17.263 { 00:13:17.263 "method": "bdev_wait_for_examine" 00:13:17.263 } 00:13:17.263 ] 00:13:17.263 } 00:13:17.263 ] 00:13:17.263 } 00:13:17.524 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:17.524 fio-3.35 00:13:17.524 Starting 1 thread 00:13:24.108 00:13:24.108 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71554: Sat Dec 14 12:39:22 2024 00:13:24.108 write: IOPS=32.2k, BW=126MiB/s (132MB/s)(629MiB/5001msec); 0 zone resets 00:13:24.108 slat (usec): min=4, max=2526, avg=24.61, stdev=93.21 00:13:24.108 clat (usec): min=108, max=8334, avg=1321.97, stdev=561.68 00:13:24.108 lat (usec): min=187, max=8341, avg=1346.58, stdev=554.29 00:13:24.108 clat percentiles (usec): 00:13:24.108 | 1.00th=[ 273], 5.00th=[ 510], 10.00th=[ 660], 20.00th=[ 865], 00:13:24.108 | 30.00th=[ 1020], 40.00th=[ 1156], 50.00th=[ 1287], 60.00th=[ 1401], 00:13:24.108 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 1991], 95.00th=[ 2278], 00:13:24.108 | 99.00th=[ 2999], 99.50th=[ 3294], 99.90th=[ 4359], 99.95th=[ 4621], 00:13:24.108 | 99.99th=[ 8029] 00:13:24.108 bw ( KiB/s): min=120568, max=140096, per=100.00%, avg=129485.33, stdev=5725.42, samples=9 00:13:24.108 iops : min=30142, max=35024, avg=32371.33, stdev=1431.35, samples=9 00:13:24.108 lat (usec) : 250=0.72%, 500=4.08%, 750=9.30%, 1000=14.65% 00:13:24.108 lat (msec) : 2=61.50%, 4=9.61%, 10=0.14% 00:13:24.108 cpu : usr=35.52%, sys=54.70%, ctx=10, majf=0, minf=765 00:13:24.108 IO depths : 1=0.3%, 2=0.9%, 4=2.8%, 8=8.5%, 16=24.0%, 32=61.3%, >=64=2.0% 00:13:24.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:24.108 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:13:24.108 issued rwts: total=0,161111,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:24.108 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:24.108 00:13:24.108 Run status group 0 (all jobs): 00:13:24.108 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=629MiB (660MB), run=5001-5001msec 00:13:24.370 ----------------------------------------------------- 00:13:24.370 Suppressions used: 00:13:24.370 count bytes template 00:13:24.370 1 11 /usr/src/fio/parse.c 00:13:24.370 1 8 libtcmalloc_minimal.so 00:13:24.370 1 904 libcrypto.so 00:13:24.370 ----------------------------------------------------- 00:13:24.370 00:13:24.370 00:13:24.370 real 0m13.980s 00:13:24.370 user 0m6.546s 00:13:24.370 sys 0m6.127s 00:13:24.370 
12:39:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.370 ************************************ 00:13:24.370 END TEST xnvme_fio_plugin 00:13:24.370 12:39:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:24.370 ************************************ 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:24.370 12:39:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:24.370 12:39:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.370 12:39:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.370 12:39:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.370 ************************************ 00:13:24.370 START TEST xnvme_rpc 00:13:24.370 ************************************ 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71640 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71640 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71640 ']' 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.370 12:39:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:24.370 [2024-12-14 12:39:24.086752] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
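One detail worth noting in the fio plugin legs (the libaio pair just finished; the io_uring pair follows below) is the sanitizer dance: the fio binary itself is not ASan-instrumented, so the harness resolves the ASan runtime that the SPDK plugin links against and preloads it ahead of the plugin, ensuring the interceptors are installed before any instrumented code loads. Condensed from the ldd/awk logic visible in the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

    # The third ldd column is the resolved library path.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

    # Preload the sanitizer runtime first, then the plugin, and run fio with
    # the spdk_bdev ioengine and the JSON config delivered on fd 62.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev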
00:13:24.370 [2024-12-14 12:39:24.086966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71640 ] 00:13:24.631 [2024-12-14 12:39:24.260859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.893 [2024-12-14 12:39:24.377260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.471 xnvme_bdev 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.471 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:25.472 12:39:25 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.472 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71640 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71640 ']' 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71640 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71640 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71640' 00:13:25.734 killing process with pid 71640 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71640 00:13:25.734 12:39:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71640 00:13:27.651 00:13:27.651 real 0m2.912s 00:13:27.651 user 0m2.922s 00:13:27.651 sys 0m0.494s 00:13:27.651 12:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.651 ************************************ 00:13:27.651 END TEST xnvme_rpc 00:13:27.651 ************************************ 00:13:27.651 12:39:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.651 12:39:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:27.651 12:39:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.651 12:39:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.651 12:39:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.651 ************************************ 00:13:27.651 START TEST xnvme_bdevperf 00:13:27.651 ************************************ 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:27.651 12:39:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:27.651 { 00:13:27.651 "subsystems": [ 00:13:27.651 { 00:13:27.651 "subsystem": "bdev", 00:13:27.651 "config": [ 00:13:27.651 { 00:13:27.651 "params": { 00:13:27.651 "io_mechanism": "io_uring", 00:13:27.651 "conserve_cpu": false, 00:13:27.651 "filename": "/dev/nvme0n1", 00:13:27.651 "name": "xnvme_bdev" 00:13:27.651 }, 00:13:27.651 "method": "bdev_xnvme_create" 00:13:27.651 }, 00:13:27.651 { 00:13:27.651 "method": "bdev_wait_for_examine" 00:13:27.651 } 00:13:27.651 ] 00:13:27.651 } 00:13:27.651 ] 00:13:27.651 } 00:13:27.651 [2024-12-14 12:39:27.038375] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:27.651 [2024-12-14 12:39:27.038522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71713 ] 00:13:27.651 [2024-12-14 12:39:27.208999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.651 [2024-12-14 12:39:27.326028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.913 Running I/O for 5 seconds... 00:13:30.274 32694.00 IOPS, 127.71 MiB/s [2024-12-14T12:39:30.953Z] 32818.00 IOPS, 128.20 MiB/s [2024-12-14T12:39:31.894Z] 33042.67 IOPS, 129.07 MiB/s [2024-12-14T12:39:32.836Z] 32969.50 IOPS, 128.79 MiB/s [2024-12-14T12:39:32.836Z] 33094.40 IOPS, 129.28 MiB/s 00:13:33.099 Latency(us) 00:13:33.099 [2024-12-14T12:39:32.836Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.099 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:33.099 xnvme_bdev : 5.01 33070.21 129.18 0.00 0.00 1931.16 466.31 10132.87 00:13:33.099 [2024-12-14T12:39:32.836Z] =================================================================================================================== 00:13:33.099 [2024-12-14T12:39:32.836Z] Total : 33070.21 129.18 0.00 0.00 1931.16 466.31 10132.87 00:13:34.040 12:39:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:34.040 12:39:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:34.040 12:39:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:34.040 12:39:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:34.040 12:39:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:34.040 { 00:13:34.040 "subsystems": [ 00:13:34.040 { 00:13:34.040 "subsystem": "bdev", 00:13:34.040 "config": [ 00:13:34.040 { 00:13:34.040 "params": { 00:13:34.040 "io_mechanism": "io_uring", 00:13:34.040 "conserve_cpu": false, 00:13:34.040 "filename": "/dev/nvme0n1", 00:13:34.040 "name": "xnvme_bdev" 00:13:34.040 }, 00:13:34.040 "method": "bdev_xnvme_create" 00:13:34.040 }, 00:13:34.040 { 00:13:34.040 "method": "bdev_wait_for_examine" 00:13:34.040 } 00:13:34.040 ] 00:13:34.040 } 00:13:34.040 ] 00:13:34.040 } 00:13:34.040 [2024-12-14 12:39:33.488102] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
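Nothing about the three tests themselves changed between the libaio legs and these io_uring legs; the harness only rewrote two entries in the method_bdev_xnvme_create_0 array and looped again. A sketch of that outer matrix, modeled on the loop variables visible in the xnvme.sh trace (the array literals are inferred from the runs in this log, not copied from the script):

    xnvme_io=(libaio io_uring)        # io_mechanisms covered in this log
    xnvme_conserve_cpu=(false true)   # each mechanism runs both settings

    declare -A method_bdev_xnvme_create_0

    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
        for cc in "${xnvme_conserve_cpu[@]}"; do
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc               # RPC create/verify/delete
            run_test xnvme_bdevperf xnvme_bdevperf     # randread + randwrite
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done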
00:13:34.040 [2024-12-14 12:39:33.488254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71784 ] 00:13:34.040 [2024-12-14 12:39:33.653177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.040 [2024-12-14 12:39:33.770792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.613 Running I/O for 5 seconds... 00:13:36.505 34038.00 IOPS, 132.96 MiB/s [2024-12-14T12:39:37.185Z] 34035.50 IOPS, 132.95 MiB/s [2024-12-14T12:39:38.129Z] 34279.67 IOPS, 133.90 MiB/s [2024-12-14T12:39:39.073Z] 34213.50 IOPS, 133.65 MiB/s 00:13:39.336 Latency(us) 00:13:39.336 [2024-12-14T12:39:39.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:39.336 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:39.336 xnvme_bdev : 5.00 34259.96 133.83 0.00 0.00 1863.98 327.68 8418.86 00:13:39.336 [2024-12-14T12:39:39.073Z] =================================================================================================================== 00:13:39.336 [2024-12-14T12:39:39.073Z] Total : 34259.96 133.83 0.00 0.00 1863.98 327.68 8418.86 00:13:40.280 00:13:40.280 real 0m12.879s 00:13:40.280 user 0m6.049s 00:13:40.280 sys 0m6.561s 00:13:40.280 12:39:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.280 ************************************ 00:13:40.280 END TEST xnvme_bdevperf 00:13:40.280 ************************************ 00:13:40.280 12:39:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:40.280 12:39:39 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:40.280 12:39:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:40.280 12:39:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.280 12:39:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.280 ************************************ 00:13:40.280 START TEST xnvme_fio_plugin 00:13:40.280 ************************************ 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.280 12:39:39 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.280 12:39:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:40.280 { 00:13:40.280 "subsystems": [ 00:13:40.280 { 00:13:40.280 "subsystem": "bdev", 00:13:40.280 "config": [ 00:13:40.280 { 00:13:40.280 "params": { 00:13:40.280 "io_mechanism": "io_uring", 00:13:40.280 "conserve_cpu": false, 00:13:40.280 "filename": "/dev/nvme0n1", 00:13:40.280 "name": "xnvme_bdev" 00:13:40.280 }, 00:13:40.280 "method": "bdev_xnvme_create" 00:13:40.280 }, 00:13:40.280 { 00:13:40.280 "method": "bdev_wait_for_examine" 00:13:40.280 } 00:13:40.280 ] 00:13:40.280 } 00:13:40.280 ] 00:13:40.280 } 00:13:40.542 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:40.542 fio-3.35 00:13:40.542 Starting 1 thread 00:13:47.132 00:13:47.132 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71903: Sat Dec 14 12:39:45 2024 00:13:47.132 read: IOPS=32.8k, BW=128MiB/s (135MB/s)(642MiB/5002msec) 00:13:47.132 slat (nsec): min=2877, max=83780, avg=3639.42, stdev=1989.96 00:13:47.132 clat (usec): min=791, max=7059, avg=1798.68, stdev=278.78 00:13:47.132 lat (usec): min=794, max=7072, avg=1802.32, stdev=279.08 00:13:47.132 clat percentiles (usec): 00:13:47.132 | 1.00th=[ 1352], 5.00th=[ 1450], 10.00th=[ 1500], 20.00th=[ 1582], 00:13:47.132 | 30.00th=[ 1647], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1827], 00:13:47.132 | 70.00th=[ 1893], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:47.132 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3130], 99.95th=[ 3359], 00:13:47.132 | 99.99th=[ 6980] 00:13:47.132 bw ( KiB/s): min=124416, max=133632, per=99.99%, avg=131354.67, 
stdev=2940.91, samples=9 00:13:47.132 iops : min=31104, max=33408, avg=32838.67, stdev=735.23, samples=9 00:13:47.132 lat (usec) : 1000=0.02% 00:13:47.132 lat (msec) : 2=80.65%, 4=19.29%, 10=0.04% 00:13:47.132 cpu : usr=31.43%, sys=67.27%, ctx=11, majf=0, minf=762 00:13:47.132 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:47.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.132 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:47.132 issued rwts: total=164269,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.132 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:47.132 00:13:47.132 Run status group 0 (all jobs): 00:13:47.132 READ: bw=128MiB/s (135MB/s), 128MiB/s-128MiB/s (135MB/s-135MB/s), io=642MiB (673MB), run=5002-5002msec 00:13:47.132 ----------------------------------------------------- 00:13:47.132 Suppressions used: 00:13:47.132 count bytes template 00:13:47.132 1 11 /usr/src/fio/parse.c 00:13:47.132 1 8 libtcmalloc_minimal.so 00:13:47.132 1 904 libcrypto.so 00:13:47.132 ----------------------------------------------------- 00:13:47.132 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:47.132 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:47.394 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:47.394 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:47.394 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:47.394 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:47.394 12:39:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:47.394 { 00:13:47.394 "subsystems": [ 00:13:47.394 { 00:13:47.394 "subsystem": "bdev", 00:13:47.394 "config": [ 00:13:47.394 { 00:13:47.394 "params": { 00:13:47.394 "io_mechanism": "io_uring", 00:13:47.394 "conserve_cpu": false, 00:13:47.394 "filename": "/dev/nvme0n1", 00:13:47.394 "name": "xnvme_bdev" 00:13:47.394 }, 00:13:47.394 "method": "bdev_xnvme_create" 00:13:47.394 }, 00:13:47.394 { 00:13:47.394 "method": "bdev_wait_for_examine" 00:13:47.394 } 00:13:47.394 ] 00:13:47.394 } 00:13:47.394 ] 00:13:47.394 } 00:13:47.394 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:47.394 fio-3.35 00:13:47.394 Starting 1 thread 00:13:53.981 00:13:53.981 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71995: Sat Dec 14 12:39:52 2024 00:13:53.981 write: IOPS=33.8k, BW=132MiB/s (138MB/s)(660MiB/5002msec); 0 zone resets 00:13:53.981 slat (nsec): min=2923, max=79085, avg=3933.22, stdev=1905.29 00:13:53.981 clat (usec): min=454, max=9073, avg=1739.24, stdev=246.71 00:13:53.981 lat (usec): min=463, max=9076, avg=1743.18, stdev=247.06 00:13:53.981 clat percentiles (usec): 00:13:53.981 | 1.00th=[ 1336], 5.00th=[ 1418], 10.00th=[ 1483], 20.00th=[ 1549], 00:13:53.981 | 30.00th=[ 1598], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1762], 00:13:53.981 | 70.00th=[ 1827], 80.00th=[ 1909], 90.00th=[ 2040], 95.00th=[ 2147], 00:13:53.981 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3097], 99.95th=[ 3294], 00:13:53.981 | 99.99th=[ 6849] 00:13:53.981 bw ( KiB/s): min=132216, max=138016, per=100.00%, avg=135365.33, stdev=1754.54, samples=9 00:13:53.981 iops : min=33054, max=34502, avg=33841.33, stdev=438.19, samples=9 00:13:53.981 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:13:53.981 lat (msec) : 2=87.94%, 4=11.99%, 10=0.02% 00:13:53.981 cpu : usr=32.23%, sys=66.55%, ctx=15, majf=0, minf=763 00:13:53.981 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:13:53.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.981 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:53.981 issued rwts: total=0,168882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.981 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:53.981 00:13:53.981 Run status group 0 (all jobs): 00:13:53.981 WRITE: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=660MiB (692MB), run=5002-5002msec 00:13:54.243 ----------------------------------------------------- 00:13:54.243 Suppressions used: 00:13:54.243 count bytes template 00:13:54.243 1 11 /usr/src/fio/parse.c 00:13:54.243 1 8 libtcmalloc_minimal.so 00:13:54.243 1 904 libcrypto.so 00:13:54.243 ----------------------------------------------------- 00:13:54.243 00:13:54.243 00:13:54.243 real 0m13.867s 00:13:54.243 user 0m6.126s 00:13:54.243 sys 0m7.278s 00:13:54.243 12:39:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:54.243 ************************************ 00:13:54.243 END TEST xnvme_fio_plugin 00:13:54.243 ************************************ 00:13:54.243 12:39:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:54.243 12:39:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:54.243 12:39:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:54.243 12:39:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:54.243 12:39:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:54.243 12:39:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:54.243 12:39:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.243 12:39:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:54.243 ************************************ 00:13:54.243 START TEST xnvme_rpc 00:13:54.243 ************************************ 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72081 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72081 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72081 ']' 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:54.243 12:39:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:54.243 [2024-12-14 12:39:53.929215] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
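The conserve_cpu toggle this final leg flips back to true reaches the target as a literal CLI flag: xnvme_rpc keeps a two-entry map from the boolean to the flag text (the cc array initialized at xnvme.sh@50 in the trace) and splices it into the create call, which is why the conserve_cpu=false run above passed a bare '' where this one passes -c. A sketch of that plumbing, assuming rpc_cmd forwards its arguments to scripts/rpc.py unchanged:

    declare -A cc
    cc["false"]=      # empty string, conserve_cpu stays at its false default
    cc["true"]=-c     # -c enables conserve_cpu

    conserve_cpu=true
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring "${cc[$conserve_cpu]}"

    # The follow-up check compares the registered value against the expectation.
    [[ $(rpc_cmd framework_get_config bdev \
           | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu') \
       == "$conserve_cpu" ]]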
00:13:54.243 [2024-12-14 12:39:53.929367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72081 ] 00:13:54.504 [2024-12-14 12:39:54.095755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.504 [2024-12-14 12:39:54.216298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.450 xnvme_bdev 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.450 12:39:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72081 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72081 ']' 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72081 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72081 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.450 killing process with pid 72081 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72081' 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72081 00:13:55.450 12:39:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72081 00:13:57.367 00:13:57.367 real 0m2.923s 00:13:57.367 user 0m2.930s 00:13:57.367 sys 0m0.464s 00:13:57.367 12:39:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.367 ************************************ 00:13:57.367 END TEST xnvme_rpc 00:13:57.367 ************************************ 00:13:57.367 12:39:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.367 12:39:56 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:57.367 12:39:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.367 12:39:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.367 12:39:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.367 ************************************ 00:13:57.367 START TEST xnvme_bdevperf 00:13:57.367 ************************************ 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:57.367 12:39:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:57.367 { 00:13:57.367 "subsystems": [ 00:13:57.367 { 00:13:57.367 "subsystem": "bdev", 00:13:57.367 "config": [ 00:13:57.367 { 00:13:57.367 "params": { 00:13:57.367 "io_mechanism": "io_uring", 00:13:57.367 "conserve_cpu": true, 00:13:57.367 "filename": "/dev/nvme0n1", 00:13:57.367 "name": "xnvme_bdev" 00:13:57.367 }, 00:13:57.367 "method": "bdev_xnvme_create" 00:13:57.367 }, 00:13:57.367 { 00:13:57.367 "method": "bdev_wait_for_examine" 00:13:57.367 } 00:13:57.367 ] 00:13:57.367 } 00:13:57.367 ] 00:13:57.367 } 00:13:57.367 [2024-12-14 12:39:56.894987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:57.367 [2024-12-14 12:39:56.895146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72155 ] 00:13:57.367 [2024-12-14 12:39:57.060262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.628 [2024-12-14 12:39:57.179497] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.890 Running I/O for 5 seconds... 00:13:59.822 33892.00 IOPS, 132.39 MiB/s [2024-12-14T12:40:00.505Z] 33772.50 IOPS, 131.92 MiB/s [2024-12-14T12:40:01.889Z] 34031.00 IOPS, 132.93 MiB/s [2024-12-14T12:40:02.828Z] 34022.25 IOPS, 132.90 MiB/s [2024-12-14T12:40:02.828Z] 33952.60 IOPS, 132.63 MiB/s 00:14:03.091 Latency(us) 00:14:03.091 [2024-12-14T12:40:02.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.091 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:03.091 xnvme_bdev : 5.01 33922.54 132.51 0.00 0.00 1882.64 1127.98 10032.05 00:14:03.091 [2024-12-14T12:40:02.828Z] =================================================================================================================== 00:14:03.091 [2024-12-14T12:40:02.828Z] Total : 33922.54 132.51 0.00 0.00 1882.64 1127.98 10032.05 00:14:03.661 12:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:03.661 12:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:03.661 12:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:03.661 12:40:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:03.661 12:40:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:03.661 { 00:14:03.661 "subsystems": [ 00:14:03.661 { 00:14:03.661 "subsystem": "bdev", 00:14:03.661 "config": [ 00:14:03.661 { 00:14:03.661 "params": { 00:14:03.661 "io_mechanism": "io_uring", 00:14:03.661 "conserve_cpu": true, 00:14:03.661 "filename": "/dev/nvme0n1", 00:14:03.661 "name": "xnvme_bdev" 00:14:03.661 }, 00:14:03.661 "method": "bdev_xnvme_create" 00:14:03.661 }, 00:14:03.661 { 00:14:03.661 "method": "bdev_wait_for_examine" 00:14:03.661 } 00:14:03.661 ] 00:14:03.662 } 00:14:03.662 ] 00:14:03.662 } 00:14:03.662 [2024-12-14 12:40:03.340522] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
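The JSON blocks printed by gen_conf above are the bdev configuration that the test hands to bdevperf on file descriptor 62. A minimal sketch of the first (randread) invocation run by hand, assuming the build-tree paths shown in this log; the here-doc matches the gen_conf output for the io_uring pass:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The -q 64 / -o 4096 / -t 5 options mirror the fio settings used later in this run (--iodepth=64 --bs=4k --runtime=5), and -T appears to name the bdev under test (xnvme_bdev, the name given in the config).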
00:14:03.662 [2024-12-14 12:40:03.340662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72230 ] 00:14:03.922 [2024-12-14 12:40:03.497548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.922 [2024-12-14 12:40:03.618619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.182 Running I/O for 5 seconds... 00:14:06.502 34979.00 IOPS, 136.64 MiB/s [2024-12-14T12:40:07.181Z] 34858.00 IOPS, 136.16 MiB/s [2024-12-14T12:40:08.124Z] 35246.33 IOPS, 137.68 MiB/s [2024-12-14T12:40:09.066Z] 35239.75 IOPS, 137.66 MiB/s [2024-12-14T12:40:09.066Z] 35159.80 IOPS, 137.34 MiB/s 00:14:09.329 Latency(us) 00:14:09.329 [2024-12-14T12:40:09.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:09.329 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:09.329 xnvme_bdev : 5.01 35135.48 137.25 0.00 0.00 1817.44 970.44 8418.86 00:14:09.329 [2024-12-14T12:40:09.066Z] =================================================================================================================== 00:14:09.329 [2024-12-14T12:40:09.066Z] Total : 35135.48 137.25 0.00 0.00 1817.44 970.44 8418.86 00:14:10.272 00:14:10.272 real 0m12.893s 00:14:10.272 user 0m8.069s 00:14:10.272 sys 0m4.264s 00:14:10.272 12:40:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:10.272 ************************************ 00:14:10.272 END TEST xnvme_bdevperf 00:14:10.272 ************************************ 00:14:10.272 12:40:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:10.272 12:40:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:10.272 12:40:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:10.272 12:40:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:10.272 12:40:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:10.272 ************************************ 00:14:10.272 START TEST xnvme_fio_plugin 00:14:10.272 ************************************ 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:10.272 12:40:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:10.272 { 00:14:10.272 "subsystems": [ 00:14:10.272 { 00:14:10.272 "subsystem": "bdev", 00:14:10.272 "config": [ 00:14:10.272 { 00:14:10.272 "params": { 00:14:10.272 "io_mechanism": "io_uring", 00:14:10.272 "conserve_cpu": true, 00:14:10.272 "filename": "/dev/nvme0n1", 00:14:10.272 "name": "xnvme_bdev" 00:14:10.272 }, 00:14:10.272 "method": "bdev_xnvme_create" 00:14:10.272 }, 00:14:10.272 { 00:14:10.272 "method": "bdev_wait_for_examine" 00:14:10.272 } 00:14:10.272 ] 00:14:10.272 } 00:14:10.272 ] 00:14:10.272 } 00:14:10.272 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:10.272 fio-3.35 00:14:10.272 Starting 1 thread 00:14:16.861 00:14:16.861 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72345: Sat Dec 14 12:40:15 2024 00:14:16.861 read: IOPS=33.2k, BW=130MiB/s (136MB/s)(649MiB/5001msec) 00:14:16.861 slat (nsec): min=2884, max=93409, avg=3641.63, stdev=1825.60 00:14:16.861 clat (usec): min=1177, max=6721, avg=1779.08, stdev=238.89 00:14:16.861 lat (usec): min=1180, max=6725, avg=1782.72, stdev=239.25 00:14:16.861 clat percentiles (usec): 00:14:16.861 | 1.00th=[ 1369], 5.00th=[ 1467], 10.00th=[ 1516], 20.00th=[ 1582], 00:14:16.861 | 30.00th=[ 1631], 40.00th=[ 1696], 50.00th=[ 1745], 60.00th=[ 1795], 00:14:16.861 | 70.00th=[ 1876], 80.00th=[ 1958], 90.00th=[ 2089], 95.00th=[ 2212], 00:14:16.861 | 99.00th=[ 2474], 99.50th=[ 2638], 99.90th=[ 3130], 99.95th=[ 3359], 00:14:16.861 | 99.99th=[ 3916] 00:14:16.861 bw 
( KiB/s): min=127488, max=136192, per=99.92%, avg=132776.89, stdev=2829.55, samples=9 00:14:16.861 iops : min=31872, max=34048, avg=33194.22, stdev=707.39, samples=9 00:14:16.861 lat (msec) : 2=83.46%, 4=16.53%, 10=0.01% 00:14:16.861 cpu : usr=54.52%, sys=41.84%, ctx=15, majf=0, minf=762 00:14:16.861 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:16.861 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.861 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:16.861 issued rwts: total=166142,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.861 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:16.861 00:14:16.861 Run status group 0 (all jobs): 00:14:16.861 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=649MiB (681MB), run=5001-5001msec 00:14:17.122 ----------------------------------------------------- 00:14:17.122 Suppressions used: 00:14:17.122 count bytes template 00:14:17.122 1 11 /usr/src/fio/parse.c 00:14:17.122 1 8 libtcmalloc_minimal.so 00:14:17.122 1 904 libcrypto.so 00:14:17.122 ----------------------------------------------------- 00:14:17.122 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:17.122 12:40:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:17.122 { 00:14:17.122 "subsystems": [ 00:14:17.122 { 00:14:17.122 "subsystem": "bdev", 00:14:17.122 "config": [ 00:14:17.122 { 00:14:17.122 "params": { 00:14:17.122 "io_mechanism": "io_uring", 00:14:17.122 "conserve_cpu": true, 00:14:17.122 "filename": "/dev/nvme0n1", 00:14:17.122 "name": "xnvme_bdev" 00:14:17.122 }, 00:14:17.122 "method": "bdev_xnvme_create" 00:14:17.122 }, 00:14:17.122 { 00:14:17.122 "method": "bdev_wait_for_examine" 00:14:17.122 } 00:14:17.122 ] 00:14:17.122 } 00:14:17.122 ] 00:14:17.122 } 00:14:17.383 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:17.383 fio-3.35 00:14:17.383 Starting 1 thread 00:14:23.971 00:14:23.971 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72437: Sat Dec 14 12:40:22 2024 00:14:23.971 write: IOPS=36.3k, BW=142MiB/s (149MB/s)(709MiB/5001msec); 0 zone resets 00:14:23.971 slat (usec): min=2, max=124, avg= 3.90, stdev= 1.64 00:14:23.971 clat (usec): min=993, max=4032, avg=1611.25, stdev=260.31 00:14:23.971 lat (usec): min=997, max=4037, avg=1615.15, stdev=260.54 00:14:23.971 clat percentiles (usec): 00:14:23.971 | 1.00th=[ 1156], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1385], 00:14:23.971 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1647], 00:14:23.971 | 70.00th=[ 1729], 80.00th=[ 1811], 90.00th=[ 1958], 95.00th=[ 2073], 00:14:23.971 | 99.00th=[ 2311], 99.50th=[ 2442], 99.90th=[ 2769], 99.95th=[ 3326], 00:14:23.971 | 99.99th=[ 3720] 00:14:23.972 bw ( KiB/s): min=141272, max=150016, per=99.86%, avg=145013.67, stdev=2826.89, samples=9 00:14:23.972 iops : min=35318, max=37504, avg=36253.33, stdev=706.82, samples=9 00:14:23.972 lat (usec) : 1000=0.01% 00:14:23.972 lat (msec) : 2=92.20%, 4=7.80%, 10=0.01% 00:14:23.972 cpu : usr=72.98%, sys=23.64%, ctx=12, majf=0, minf=763 00:14:23.972 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:23.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:23.972 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:23.972 issued rwts: total=0,181565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:23.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:23.972 00:14:23.972 Run status group 0 (all jobs): 00:14:23.972 WRITE: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=709MiB (744MB), run=5001-5001msec 00:14:23.972 ----------------------------------------------------- 00:14:23.972 Suppressions used: 00:14:23.972 count bytes template 00:14:23.972 1 11 /usr/src/fio/parse.c 00:14:23.972 1 8 libtcmalloc_minimal.so 00:14:23.972 1 904 libcrypto.so 00:14:23.972 ----------------------------------------------------- 00:14:23.972 00:14:23.972 ************************************ 00:14:23.972 END TEST xnvme_fio_plugin 00:14:23.972 ************************************ 00:14:23.972 00:14:23.972 real 0m13.812s 
00:14:23.972 user 0m9.224s 00:14:23.972 sys 0m3.907s 00:14:23.972 12:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.972 12:40:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:23.972 12:40:23 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:23.972 12:40:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:23.972 12:40:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.972 12:40:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.972 ************************************ 00:14:23.972 START TEST xnvme_rpc 00:14:23.972 ************************************ 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:23.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72523 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72523 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72523 ']' 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:23.972 12:40:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:24.233 [2024-12-14 12:40:23.768212] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
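The xnvme_rpc test starting here repeats the create/inspect/delete RPC sequence already seen for io_uring, now against /dev/ng0n1 with the io_uring_cmd mechanism. A sketch of that flow driven by hand, assuming spdk_tgt is already listening on /var/tmp/spdk.sock and using SPDK's stock rpc.py (rpc_cmd in the trace is a thin wrapper around it):

cd /home/vagrant/spdk_repo/spdk
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
# Read back each parameter the way the test's rpc_xnvme helper does:
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
# expected: /dev/ng0n1
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev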
00:14:24.234 [2024-12-14 12:40:23.768377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72523 ] 00:14:24.234 [2024-12-14 12:40:23.928765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.495 [2024-12-14 12:40:24.044128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.068 xnvme_bdev 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.068 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72523 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72523 ']' 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72523 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72523 00:14:25.330 killing process with pid 72523 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72523' 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72523 00:14:25.330 12:40:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72523 00:14:27.244 ************************************ 00:14:27.244 END TEST xnvme_rpc 00:14:27.244 ************************************ 00:14:27.244 00:14:27.244 real 0m2.937s 00:14:27.244 user 0m2.949s 00:14:27.244 sys 0m0.475s 00:14:27.244 12:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.244 12:40:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.244 12:40:26 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:27.244 12:40:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:27.244 12:40:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.244 12:40:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.244 ************************************ 00:14:27.244 START TEST xnvme_bdevperf 00:14:27.244 ************************************ 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:27.244 12:40:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:27.244 { 00:14:27.244 "subsystems": [ 00:14:27.244 { 00:14:27.244 "subsystem": "bdev", 00:14:27.244 "config": [ 00:14:27.244 { 00:14:27.244 "params": { 00:14:27.244 "io_mechanism": "io_uring_cmd", 00:14:27.244 "conserve_cpu": false, 00:14:27.244 "filename": "/dev/ng0n1", 00:14:27.244 "name": "xnvme_bdev" 00:14:27.244 }, 00:14:27.244 "method": "bdev_xnvme_create" 00:14:27.244 }, 00:14:27.244 { 00:14:27.244 "method": "bdev_wait_for_examine" 00:14:27.244 } 00:14:27.244 ] 00:14:27.244 } 00:14:27.244 ] 00:14:27.244 } 00:14:27.244 [2024-12-14 12:40:26.748363] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:27.244 [2024-12-14 12:40:26.748507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72597 ] 00:14:27.244 [2024-12-14 12:40:26.912167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.505 [2024-12-14 12:40:27.029341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.801 Running I/O for 5 seconds... 00:14:29.706 35773.00 IOPS, 139.74 MiB/s [2024-12-14T12:40:30.387Z] 34702.00 IOPS, 135.55 MiB/s [2024-12-14T12:40:31.330Z] 34302.33 IOPS, 133.99 MiB/s [2024-12-14T12:40:32.716Z] 34238.75 IOPS, 133.75 MiB/s 00:14:32.979 Latency(us) 00:14:32.979 [2024-12-14T12:40:32.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.979 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:32.979 xnvme_bdev : 5.00 34431.81 134.50 0.00 0.00 1854.87 453.71 13510.50 00:14:32.979 [2024-12-14T12:40:32.716Z] =================================================================================================================== 00:14:32.979 [2024-12-14T12:40:32.716Z] Total : 34431.81 134.50 0.00 0.00 1854.87 453.71 13510.50 00:14:33.551 12:40:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.551 12:40:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:33.551 12:40:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:33.551 12:40:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:33.551 12:40:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:33.551 { 00:14:33.551 "subsystems": [ 00:14:33.551 { 00:14:33.551 "subsystem": "bdev", 00:14:33.551 "config": [ 00:14:33.551 { 00:14:33.551 "params": { 00:14:33.551 "io_mechanism": "io_uring_cmd", 00:14:33.551 "conserve_cpu": false, 00:14:33.551 "filename": "/dev/ng0n1", 00:14:33.551 "name": "xnvme_bdev" 00:14:33.551 }, 00:14:33.551 "method": "bdev_xnvme_create" 00:14:33.551 }, 00:14:33.551 { 00:14:33.552 "method": "bdev_wait_for_examine" 00:14:33.552 } 00:14:33.552 ] 00:14:33.552 } 00:14:33.552 ] 00:14:33.552 } 00:14:33.552 [2024-12-14 12:40:33.187142] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
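Note the filename switch for the io_uring_cmd passes: they target /dev/ng0n1, the NVMe generic character device the kernel exposes for uring passthrough, rather than the /dev/nvme0n1 block node the io_uring passes used. A quick way to see the distinction on the test box (output shape illustrative):

ls -l /dev/ng0n1 /dev/nvme0n1
# crw------- ... /dev/ng0n1     <- character device, io_uring_cmd passthrough
# brw-rw---- ... /dev/nvme0n1   <- block device, io_uring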
00:14:33.552 [2024-12-14 12:40:33.187522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72672 ] 00:14:33.812 [2024-12-14 12:40:33.356798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.812 [2024-12-14 12:40:33.478228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.073 Running I/O for 5 seconds... 00:14:36.404 34712.00 IOPS, 135.59 MiB/s [2024-12-14T12:40:37.084Z] 34619.50 IOPS, 135.23 MiB/s [2024-12-14T12:40:38.026Z] 34785.67 IOPS, 135.88 MiB/s [2024-12-14T12:40:38.969Z] 35237.00 IOPS, 137.64 MiB/s 00:14:39.232 Latency(us) 00:14:39.232 [2024-12-14T12:40:38.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.232 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:39.232 xnvme_bdev : 5.00 35240.86 137.66 0.00 0.00 1811.94 357.61 4688.34 00:14:39.232 [2024-12-14T12:40:38.969Z] =================================================================================================================== 00:14:39.232 [2024-12-14T12:40:38.970Z] Total : 35240.86 137.66 0.00 0.00 1811.94 357.61 4688.34 00:14:40.174 12:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:40.174 12:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:40.174 12:40:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:40.174 12:40:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:40.174 12:40:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:40.174 { 00:14:40.174 "subsystems": [ 00:14:40.174 { 00:14:40.174 "subsystem": "bdev", 00:14:40.174 "config": [ 00:14:40.174 { 00:14:40.174 "params": { 00:14:40.174 "io_mechanism": "io_uring_cmd", 00:14:40.174 "conserve_cpu": false, 00:14:40.174 "filename": "/dev/ng0n1", 00:14:40.174 "name": "xnvme_bdev" 00:14:40.174 }, 00:14:40.174 "method": "bdev_xnvme_create" 00:14:40.174 }, 00:14:40.174 { 00:14:40.174 "method": "bdev_wait_for_examine" 00:14:40.174 } 00:14:40.174 ] 00:14:40.174 } 00:14:40.174 ] 00:14:40.174 } 00:14:40.174 [2024-12-14 12:40:39.636350] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:40.174 [2024-12-14 12:40:39.636493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72742 ] 00:14:40.174 [2024-12-14 12:40:39.800886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.435 [2024-12-14 12:40:39.919676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.705 Running I/O for 5 seconds... 
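The MiB/s column in these summary tables is derived directly from the IOPS figure and the 4096-byte IO size set with -o. A quick check against the randwrite row above:

awk 'BEGIN { printf "%.2f\n", 38172.79 * 4096 / (1024 * 1024) }'
# prints 149.11, matching the MiB/s reported for 38172.79 IOPS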
00:14:42.591 70336.00 IOPS, 274.75 MiB/s [2024-12-14T12:40:43.269Z] 73120.00 IOPS, 285.62 MiB/s [2024-12-14T12:40:44.210Z] 75072.00 IOPS, 293.25 MiB/s [2024-12-14T12:40:45.594Z] 75504.00 IOPS, 294.94 MiB/s [2024-12-14T12:40:45.594Z] 74406.40 IOPS, 290.65 MiB/s 00:14:45.857 Latency(us) 00:14:45.857 [2024-12-14T12:40:45.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.857 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:45.857 xnvme_bdev : 5.00 74385.38 290.57 0.00 0.00 856.82 507.27 2697.06 00:14:45.857 [2024-12-14T12:40:45.594Z] =================================================================================================================== 00:14:45.857 [2024-12-14T12:40:45.594Z] Total : 74385.38 290.57 0.00 0.00 856.82 507.27 2697.06 00:14:46.117 12:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:46.117 12:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:46.117 12:40:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:46.117 12:40:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:46.117 12:40:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:46.377 { 00:14:46.377 "subsystems": [ 00:14:46.377 { 00:14:46.377 "subsystem": "bdev", 00:14:46.377 "config": [ 00:14:46.377 { 00:14:46.377 "params": { 00:14:46.377 "io_mechanism": "io_uring_cmd", 00:14:46.377 "conserve_cpu": false, 00:14:46.377 "filename": "/dev/ng0n1", 00:14:46.377 "name": "xnvme_bdev" 00:14:46.377 }, 00:14:46.377 "method": "bdev_xnvme_create" 00:14:46.377 }, 00:14:46.377 { 00:14:46.377 "method": "bdev_wait_for_examine" 00:14:46.377 } 00:14:46.377 ] 00:14:46.377 } 00:14:46.377 ] 00:14:46.377 } 00:14:46.377 [2024-12-14 12:40:45.904409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:46.377 [2024-12-14 12:40:45.904528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72822 ] 00:14:46.377 [2024-12-14 12:40:46.063500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.637 [2024-12-14 12:40:46.138988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.637 Running I/O for 5 seconds... 
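Each of these bdevperf invocations differs only in the -w workload argument; the xnvme.sh@15 trace is a loop over io_pattern_ref, a nameref onto the io_uring_cmd pattern list. A sketch of that loop, with the pattern list inferred from the four invocations in this log and gen_conf standing for the helper whose JSON output appears before each run:

for io_pattern in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done

The earlier io_uring pass ran only randread and randwrite, so the pattern list evidently varies per io_mechanism.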
00:14:48.966 3981.00 IOPS, 15.55 MiB/s [2024-12-14T12:40:49.647Z] 2104.50 IOPS, 8.22 MiB/s [2024-12-14T12:40:50.620Z] 2096.00 IOPS, 8.19 MiB/s [2024-12-14T12:40:51.565Z] 1644.75 IOPS, 6.42 MiB/s [2024-12-14T12:40:51.565Z] 1360.40 IOPS, 5.31 MiB/s 00:14:51.828 Latency(us) 00:14:51.828 [2024-12-14T12:40:51.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.828 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:51.828 xnvme_bdev : 5.19 1323.23 5.17 0.00 0.00 47471.45 58.29 780785.82 00:14:51.828 [2024-12-14T12:40:51.565Z] =================================================================================================================== 00:14:51.828 [2024-12-14T12:40:51.565Z] Total : 1323.23 5.17 0.00 0.00 47471.45 58.29 780785.82 00:14:52.399 ************************************ 00:14:52.399 END TEST xnvme_bdevperf 00:14:52.399 ************************************ 00:14:52.399 00:14:52.399 real 0m25.393s 00:14:52.399 user 0m14.551s 00:14:52.399 sys 0m10.360s 00:14:52.399 12:40:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.399 12:40:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:52.399 12:40:52 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:52.399 12:40:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:52.399 12:40:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.399 12:40:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.399 ************************************ 00:14:52.399 START TEST xnvme_fio_plugin 00:14:52.399 ************************************ 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
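The xtrace around this point is the fio plugin's sanitizer setup: it ldd's the fio ioengine, pulls out the ASan runtime it links against, and preloads that runtime ahead of the plugin so ASan initializes first. Condensed into a standalone sketch, with paths exactly as traced here:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
if [[ -n "$asan_lib" ]]; then
    # The caller supplies the JSON bdev config on fd 62, as in the
    # gen_conf output shown before each fio run in this log.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev
fi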
00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:52.399 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:52.660 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:52.660 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:52.660 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:52.660 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:52.660 12:40:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:52.660 { 00:14:52.660 "subsystems": [ 00:14:52.660 { 00:14:52.660 "subsystem": "bdev", 00:14:52.660 "config": [ 00:14:52.660 { 00:14:52.660 "params": { 00:14:52.660 "io_mechanism": "io_uring_cmd", 00:14:52.660 "conserve_cpu": false, 00:14:52.660 "filename": "/dev/ng0n1", 00:14:52.660 "name": "xnvme_bdev" 00:14:52.660 }, 00:14:52.660 "method": "bdev_xnvme_create" 00:14:52.660 }, 00:14:52.660 { 00:14:52.660 "method": "bdev_wait_for_examine" 00:14:52.660 } 00:14:52.660 ] 00:14:52.660 } 00:14:52.660 ] 00:14:52.660 } 00:14:52.660 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:52.660 fio-3.35 00:14:52.660 Starting 1 thread 00:14:59.251 00:14:59.251 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72930: Sat Dec 14 12:40:57 2024 00:14:59.251 read: IOPS=39.2k, BW=153MiB/s (160MB/s)(765MiB/5002msec) 00:14:59.251 slat (nsec): min=2889, max=91594, avg=3810.76, stdev=1871.25 00:14:59.251 clat (usec): min=589, max=3322, avg=1481.07, stdev=280.08 00:14:59.251 lat (usec): min=592, max=3351, avg=1484.88, stdev=280.38 00:14:59.251 clat percentiles (usec): 00:14:59.251 | 1.00th=[ 906], 5.00th=[ 1057], 10.00th=[ 1139], 20.00th=[ 1254], 00:14:59.251 | 30.00th=[ 1336], 40.00th=[ 1401], 50.00th=[ 1467], 60.00th=[ 1532], 00:14:59.251 | 70.00th=[ 1598], 80.00th=[ 1696], 90.00th=[ 1844], 95.00th=[ 1975], 00:14:59.251 | 99.00th=[ 2245], 99.50th=[ 2343], 99.90th=[ 2573], 99.95th=[ 2737], 00:14:59.251 | 99.99th=[ 3130] 00:14:59.252 bw ( KiB/s): min=145920, max=172032, per=100.00%, avg=158264.89, stdev=9830.01, samples=9 00:14:59.252 iops : min=36480, max=43008, avg=39566.22, stdev=2457.50, samples=9 00:14:59.252 lat (usec) : 750=0.14%, 1000=2.67% 00:14:59.252 lat (msec) : 2=92.74%, 4=4.45% 00:14:59.252 cpu : usr=34.29%, sys=64.35%, ctx=11, majf=0, minf=762 00:14:59.252 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:59.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.252 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:14:59.252 issued rwts: total=195958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.252 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:59.252 00:14:59.252 Run status group 0 (all jobs): 00:14:59.252 READ: bw=153MiB/s (160MB/s), 153MiB/s-153MiB/s (160MB/s-160MB/s), io=765MiB (803MB), run=5002-5002msec 00:14:59.252 ----------------------------------------------------- 00:14:59.252 Suppressions used: 00:14:59.252 count bytes template 00:14:59.252 1 11 /usr/src/fio/parse.c 00:14:59.252 1 8 libtcmalloc_minimal.so 00:14:59.252 1 904 libcrypto.so 00:14:59.252 ----------------------------------------------------- 00:14:59.252 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.252 { 00:14:59.252 "subsystems": [ 00:14:59.252 { 00:14:59.252 "subsystem": "bdev", 00:14:59.252 "config": [ 00:14:59.252 { 00:14:59.252 "params": { 00:14:59.252 "io_mechanism": "io_uring_cmd", 00:14:59.252 "conserve_cpu": false, 00:14:59.252 "filename": "/dev/ng0n1", 00:14:59.252 "name": "xnvme_bdev" 00:14:59.252 }, 00:14:59.252 "method": "bdev_xnvme_create" 00:14:59.252 }, 00:14:59.252 { 00:14:59.252 "method": "bdev_wait_for_examine" 00:14:59.252 } 00:14:59.252 ] 00:14:59.252 } 00:14:59.252 ] 00:14:59.252 } 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # 
[[ -n /usr/lib64/libasan.so.8 ]] 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.252 12:40:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.513 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.513 fio-3.35 00:14:59.513 Starting 1 thread 00:15:06.104 00:15:06.104 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73026: Sat Dec 14 12:41:04 2024 00:15:06.104 write: IOPS=43.9k, BW=171MiB/s (180MB/s)(857MiB/5001msec); 0 zone resets 00:15:06.104 slat (usec): min=2, max=120, avg= 3.79, stdev= 1.18 00:15:06.104 clat (usec): min=133, max=5398, avg=1321.25, stdev=281.28 00:15:06.104 lat (usec): min=136, max=5402, avg=1325.03, stdev=281.36 00:15:06.104 clat percentiles (usec): 00:15:06.104 | 1.00th=[ 832], 5.00th=[ 996], 10.00th=[ 1045], 20.00th=[ 1106], 00:15:06.104 | 30.00th=[ 1172], 40.00th=[ 1221], 50.00th=[ 1270], 60.00th=[ 1319], 00:15:06.104 | 70.00th=[ 1401], 80.00th=[ 1516], 90.00th=[ 1680], 95.00th=[ 1827], 00:15:06.104 | 99.00th=[ 2147], 99.50th=[ 2376], 99.90th=[ 3261], 99.95th=[ 3752], 00:15:06.104 | 99.99th=[ 4424] 00:15:06.104 bw ( KiB/s): min=165984, max=180128, per=100.00%, avg=175620.44, stdev=4912.04, samples=9 00:15:06.104 iops : min=41496, max=45032, avg=43905.11, stdev=1228.01, samples=9 00:15:06.104 lat (usec) : 250=0.01%, 500=0.14%, 750=0.29%, 1000=5.03% 00:15:06.104 lat (msec) : 2=92.65%, 4=1.85%, 10=0.03% 00:15:06.104 cpu : usr=43.70%, sys=55.40%, ctx=10, majf=0, minf=763 00:15:06.105 IO depths : 1=1.4%, 2=2.8%, 4=5.7%, 8=11.6%, 16=23.7%, 32=53.0%, >=64=1.7% 00:15:06.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.105 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.5%, >=64=0.0% 00:15:06.105 issued rwts: total=0,219501,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.105 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:06.105 00:15:06.105 Run status group 0 (all jobs): 00:15:06.105 WRITE: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=857MiB (899MB), run=5001-5001msec 00:15:06.105 ----------------------------------------------------- 00:15:06.105 Suppressions used: 00:15:06.105 count bytes template 00:15:06.105 1 11 /usr/src/fio/parse.c 00:15:06.105 1 8 libtcmalloc_minimal.so 00:15:06.105 1 904 libcrypto.so 00:15:06.105 ----------------------------------------------------- 00:15:06.105 00:15:06.105 ************************************ 00:15:06.105 END TEST xnvme_fio_plugin 00:15:06.105 ************************************ 00:15:06.105 00:15:06.105 real 0m13.623s 00:15:06.105 user 0m6.635s 00:15:06.105 sys 0m6.546s 00:15:06.105 12:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.105 12:41:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.105 12:41:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:06.105 12:41:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:06.105 12:41:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:15:06.105 12:41:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:06.105 12:41:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.105 12:41:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.105 12:41:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.105 ************************************ 00:15:06.105 START TEST xnvme_rpc 00:15:06.105 ************************************ 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73106 00:15:06.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73106 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73106 ']' 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.105 12:41:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.365 [2024-12-14 12:41:05.894235] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
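This third xnvme_rpc pass is the conserve_cpu=true leg of the loop above (xnvme.sh@82-84): the create call below carries -c, and the read-back check expects true instead of false. The same hand-run sketch as before, adjusted for this leg:

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected: true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev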
00:15:06.365 [2024-12-14 12:41:05.894371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73106 ] 00:15:06.365 [2024-12-14 12:41:06.063611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.626 [2024-12-14 12:41:06.193296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.197 xnvme_bdev 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:07.197 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.458 12:41:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73106 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73106 ']' 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73106 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73106 00:15:07.458 killing process with pid 73106 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73106' 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73106 00:15:07.458 12:41:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73106 00:15:09.373 ************************************ 00:15:09.373 END TEST xnvme_rpc 00:15:09.373 ************************************ 00:15:09.373 00:15:09.373 real 0m2.925s 00:15:09.373 user 0m2.919s 00:15:09.373 sys 0m0.490s 00:15:09.373 12:41:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.373 12:41:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.373 12:41:08 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:09.373 12:41:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:09.373 12:41:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.373 12:41:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:09.373 ************************************ 00:15:09.373 START TEST xnvme_bdevperf 00:15:09.373 ************************************ 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:09.373 12:41:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:09.373 { 00:15:09.373 "subsystems": [ 00:15:09.373 { 00:15:09.373 "subsystem": "bdev", 00:15:09.373 "config": [ 00:15:09.373 { 00:15:09.373 "params": { 00:15:09.373 "io_mechanism": "io_uring_cmd", 00:15:09.373 "conserve_cpu": true, 00:15:09.373 "filename": "/dev/ng0n1", 00:15:09.373 "name": "xnvme_bdev" 00:15:09.373 }, 00:15:09.373 "method": "bdev_xnvme_create" 00:15:09.373 }, 00:15:09.373 { 00:15:09.373 "method": "bdev_wait_for_examine" 00:15:09.373 } 00:15:09.373 ] 00:15:09.373 } 00:15:09.373 ] 00:15:09.373 } 00:15:09.373 [2024-12-14 12:41:08.875978] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:09.373 [2024-12-14 12:41:08.876355] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73180 ] 00:15:09.373 [2024-12-14 12:41:09.040592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.633 [2024-12-14 12:41:09.161693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.894 Running I/O for 5 seconds... 00:15:11.779 34624.00 IOPS, 135.25 MiB/s [2024-12-14T12:41:12.460Z] 35616.00 IOPS, 139.12 MiB/s [2024-12-14T12:41:13.849Z] 35882.67 IOPS, 140.17 MiB/s [2024-12-14T12:41:14.791Z] 37056.00 IOPS, 144.75 MiB/s [2024-12-14T12:41:14.791Z] 37657.60 IOPS, 147.10 MiB/s 00:15:15.054 Latency(us) 00:15:15.054 [2024-12-14T12:41:14.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.054 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:15.054 xnvme_bdev : 5.00 37632.92 147.00 0.00 0.00 1696.40 812.90 6351.95 00:15:15.054 [2024-12-14T12:41:14.791Z] =================================================================================================================== 00:15:15.054 [2024-12-14T12:41:14.791Z] Total : 37632.92 147.00 0.00 0.00 1696.40 812.90 6351.95 00:15:15.626 12:41:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.626 12:41:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:15.626 12:41:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.626 12:41:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.626 12:41:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.626 { 00:15:15.626 "subsystems": [ 00:15:15.626 { 00:15:15.626 "subsystem": "bdev", 00:15:15.626 "config": [ 00:15:15.626 { 00:15:15.626 "params": { 00:15:15.626 "io_mechanism": "io_uring_cmd", 00:15:15.626 "conserve_cpu": true, 00:15:15.626 "filename": "/dev/ng0n1", 00:15:15.626 "name": "xnvme_bdev" 00:15:15.626 }, 00:15:15.626 "method": "bdev_xnvme_create" 00:15:15.626 }, 00:15:15.626 { 00:15:15.626 "method": "bdev_wait_for_examine" 00:15:15.626 } 00:15:15.626 ] 00:15:15.626 } 00:15:15.626 ] 00:15:15.626 } 00:15:15.626 [2024-12-14 12:41:15.328352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
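The gen_conf JSON above is the same bdev subsystem configuration that the xnvme_rpc test just drove over the RPC socket one call at a time. A minimal standalone sketch of that flow, assuming a running spdk_tgt on the default RPC socket and the stock scripts/rpc.py client in place of the harness's rpc_cmd wrapper (the individual commands appear verbatim in the trace above):

# Create the xNVMe bdev over io_uring_cmd against the char-device namespace,
# with conserve_cpu enabled (-c), read it back, then tear it down.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
$rpc framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
$rpc bdev_xnvme_delete xnvme_bdev

bdevperf consumes the identical JSON non-interactively instead, fed on file descriptor 62 via --json /dev/fd/62, which is what the randread, randwrite, unmap and write_zeroes runs in this section do.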
00:15:15.626 [2024-12-14 12:41:15.328490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73254 ] 00:15:15.886 [2024-12-14 12:41:15.493026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.886 [2024-12-14 12:41:15.618273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.458 Running I/O for 5 seconds... 00:15:18.345 38580.00 IOPS, 150.70 MiB/s [2024-12-14T12:41:19.025Z] 39122.00 IOPS, 152.82 MiB/s [2024-12-14T12:41:20.012Z] 38739.00 IOPS, 151.32 MiB/s [2024-12-14T12:41:20.955Z] 38320.00 IOPS, 149.69 MiB/s [2024-12-14T12:41:20.955Z] 38189.60 IOPS, 149.18 MiB/s 00:15:21.218 Latency(us) 00:15:21.218 [2024-12-14T12:41:20.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.218 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:21.218 xnvme_bdev : 5.00 38172.79 149.11 0.00 0.00 1671.97 297.75 10788.23 00:15:21.218 [2024-12-14T12:41:20.955Z] =================================================================================================================== 00:15:21.218 [2024-12-14T12:41:20.955Z] Total : 38172.79 149.11 0.00 0.00 1671.97 297.75 10788.23 00:15:22.160 12:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:22.160 12:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:22.160 12:41:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:22.160 12:41:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:22.160 12:41:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:22.160 { 00:15:22.160 "subsystems": [ 00:15:22.160 { 00:15:22.160 "subsystem": "bdev", 00:15:22.160 "config": [ 00:15:22.160 { 00:15:22.160 "params": { 00:15:22.160 "io_mechanism": "io_uring_cmd", 00:15:22.160 "conserve_cpu": true, 00:15:22.160 "filename": "/dev/ng0n1", 00:15:22.160 "name": "xnvme_bdev" 00:15:22.160 }, 00:15:22.160 "method": "bdev_xnvme_create" 00:15:22.160 }, 00:15:22.160 { 00:15:22.160 "method": "bdev_wait_for_examine" 00:15:22.160 } 00:15:22.160 ] 00:15:22.160 } 00:15:22.160 ] 00:15:22.160 } 00:15:22.160 [2024-12-14 12:41:21.769399] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:22.160 [2024-12-14 12:41:21.769534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73323 ] 00:15:22.421 [2024-12-14 12:41:21.931504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.421 [2024-12-14 12:41:22.052613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.681 Running I/O for 5 seconds... 
00:15:24.656 75200.00 IOPS, 293.75 MiB/s [2024-12-14T12:41:25.778Z] 73152.00 IOPS, 285.75 MiB/s [2024-12-14T12:41:26.349Z] 75264.00 IOPS, 294.00 MiB/s [2024-12-14T12:41:27.742Z] 78832.00 IOPS, 307.94 MiB/s [2024-12-14T12:41:27.742Z] 82380.80 IOPS, 321.80 MiB/s 00:15:28.005 Latency(us) 00:15:28.005 [2024-12-14T12:41:27.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.005 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:28.005 xnvme_bdev : 5.00 82356.72 321.71 0.00 0.00 773.59 374.94 6805.66 00:15:28.005 [2024-12-14T12:41:27.742Z] =================================================================================================================== 00:15:28.005 [2024-12-14T12:41:27.742Z] Total : 82356.72 321.71 0.00 0.00 773.59 374.94 6805.66 00:15:28.263 12:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:28.263 12:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:28.263 12:41:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:28.263 12:41:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:28.263 12:41:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:28.263 { 00:15:28.263 "subsystems": [ 00:15:28.263 { 00:15:28.263 "subsystem": "bdev", 00:15:28.263 "config": [ 00:15:28.263 { 00:15:28.263 "params": { 00:15:28.263 "io_mechanism": "io_uring_cmd", 00:15:28.263 "conserve_cpu": true, 00:15:28.263 "filename": "/dev/ng0n1", 00:15:28.263 "name": "xnvme_bdev" 00:15:28.263 }, 00:15:28.263 "method": "bdev_xnvme_create" 00:15:28.263 }, 00:15:28.264 { 00:15:28.264 "method": "bdev_wait_for_examine" 00:15:28.264 } 00:15:28.264 ] 00:15:28.264 } 00:15:28.264 ] 00:15:28.264 } 00:15:28.264 [2024-12-14 12:41:27.964448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:28.264 [2024-12-14 12:41:27.964562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73406 ] 00:15:28.522 [2024-12-14 12:41:28.119804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.522 [2024-12-14 12:41:28.195548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.780 Running I/O for 5 seconds... 
00:15:30.658 1819.00 IOPS, 7.11 MiB/s [2024-12-14T12:41:31.776Z] 8867.50 IOPS, 34.64 MiB/s [2024-12-14T12:41:32.715Z] 20360.00 IOPS, 79.53 MiB/s [2024-12-14T12:41:33.659Z] 24628.50 IOPS, 96.21 MiB/s [2024-12-14T12:41:33.659Z] 27291.20 IOPS, 106.61 MiB/s 00:15:33.922 Latency(us) 00:15:33.922 [2024-12-14T12:41:33.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.922 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:33.922 xnvme_bdev : 5.00 27290.65 106.60 0.00 0.00 2340.30 69.71 170191.95 00:15:33.922 [2024-12-14T12:41:33.659Z] =================================================================================================================== 00:15:33.922 [2024-12-14T12:41:33.659Z] Total : 27290.65 106.60 0.00 0.00 2340.30 69.71 170191.95 00:15:34.492 00:15:34.492 real 0m25.399s 00:15:34.492 user 0m15.858s 00:15:34.492 sys 0m7.608s 00:15:34.492 12:41:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.492 12:41:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:34.492 ************************************ 00:15:34.492 END TEST xnvme_bdevperf 00:15:34.492 ************************************ 00:15:34.752 12:41:34 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:34.752 12:41:34 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.752 12:41:34 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.752 12:41:34 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:34.753 ************************************ 00:15:34.753 START TEST xnvme_fio_plugin 00:15:34.753 ************************************ 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 
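The xnvme_fio_plugin test above replays the same workloads through fio's external spdk_bdev ioengine rather than bdevperf. A standalone sketch of the randread job it launches, assuming the gen_conf JSON is saved to a local file (xnvme_bdev.json here is a hypothetical path; the harness passes it on fd 62 instead) and that fio can load the SPDK plugin; the trace below additionally preloads libasan, which only matters for sanitized builds:

SPDK=/home/vagrant/spdk_repo/spdk
# xnvme_bdev.json: a saved copy of the bdev subsystem JSON shown below
LD_PRELOAD=$SPDK/build/fio/spdk_bdev /usr/src/fio/fio \
  --ioengine=spdk_bdev --spdk_json_conf=./xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev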
00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:34.753 12:41:34 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:34.753 { 00:15:34.753 "subsystems": [ 00:15:34.753 { 00:15:34.753 "subsystem": "bdev", 00:15:34.753 "config": [ 00:15:34.753 { 00:15:34.753 "params": { 00:15:34.753 "io_mechanism": "io_uring_cmd", 00:15:34.753 "conserve_cpu": true, 00:15:34.753 "filename": "/dev/ng0n1", 00:15:34.753 "name": "xnvme_bdev" 00:15:34.753 }, 00:15:34.753 "method": "bdev_xnvme_create" 00:15:34.753 }, 00:15:34.753 { 00:15:34.753 "method": "bdev_wait_for_examine" 00:15:34.753 } 00:15:34.753 ] 00:15:34.753 } 00:15:34.753 ] 00:15:34.753 } 00:15:34.753 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:34.753 fio-3.35 00:15:34.753 Starting 1 thread 00:15:41.342 00:15:41.342 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73518: Sat Dec 14 12:41:40 2024 00:15:41.342 read: IOPS=37.3k, BW=146MiB/s (153MB/s)(730MiB/5002msec) 00:15:41.342 slat (nsec): min=2863, max=92686, avg=3946.85, stdev=1996.16 00:15:41.342 clat (usec): min=867, max=3604, avg=1554.17, stdev=229.55 00:15:41.342 lat (usec): min=870, max=3619, avg=1558.11, stdev=230.11 00:15:41.342 clat percentiles (usec): 00:15:41.342 | 1.00th=[ 1090], 5.00th=[ 1221], 10.00th=[ 1303], 20.00th=[ 1385], 00:15:41.342 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1532], 60.00th=[ 1582], 00:15:41.342 | 70.00th=[ 1631], 80.00th=[ 1696], 90.00th=[ 1827], 95.00th=[ 1958], 00:15:41.342 | 99.00th=[ 2245], 99.50th=[ 2376], 99.90th=[ 2999], 99.95th=[ 3228], 00:15:41.342 | 99.99th=[ 3523] 00:15:41.342 bw ( KiB/s): min=144384, max=162816, per=100.00%, avg=149959.11, stdev=6741.33, samples=9 00:15:41.342 iops : min=36096, max=40704, avg=37489.78, stdev=1685.33, samples=9 00:15:41.342 lat (usec) : 1000=0.16% 00:15:41.342 lat (msec) : 2=95.67%, 4=4.17% 00:15:41.342 cpu : usr=44.75%, sys=51.89%, ctx=16, majf=0, minf=762 00:15:41.342 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:41.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:41.342 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:41.342 issued rwts: total=186816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:41.342 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:41.342 00:15:41.342 Run status group 0 (all jobs): 00:15:41.342 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=730MiB (765MB), run=5002-5002msec 00:15:41.603 ----------------------------------------------------- 00:15:41.603 Suppressions used: 00:15:41.603 count bytes template 00:15:41.603 1 11 /usr/src/fio/parse.c 00:15:41.603 1 8 libtcmalloc_minimal.so 00:15:41.603 1 904 libcrypto.so 00:15:41.603 ----------------------------------------------------- 00:15:41.603 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:41.603 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:41.604 12:41:41 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:41.604 { 00:15:41.604 "subsystems": [ 00:15:41.604 { 00:15:41.604 "subsystem": "bdev", 00:15:41.604 "config": [ 00:15:41.604 { 00:15:41.604 "params": { 00:15:41.604 "io_mechanism": "io_uring_cmd", 00:15:41.604 "conserve_cpu": true, 00:15:41.604 "filename": "/dev/ng0n1", 00:15:41.604 "name": "xnvme_bdev" 00:15:41.604 }, 00:15:41.604 "method": "bdev_xnvme_create" 00:15:41.604 }, 00:15:41.604 { 00:15:41.604 "method": "bdev_wait_for_examine" 00:15:41.604 } 00:15:41.604 ] 00:15:41.604 } 00:15:41.604 ] 00:15:41.604 } 00:15:41.865 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:41.865 fio-3.35 00:15:41.865 Starting 1 thread 00:15:48.450 00:15:48.450 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73609: Sat Dec 14 12:41:47 2024 00:15:48.450 write: IOPS=43.0k, BW=168MiB/s (176MB/s)(841MiB/5001msec); 0 zone resets 00:15:48.450 slat (usec): min=2, max=131, avg= 3.82, stdev= 1.94 00:15:48.450 clat (usec): min=413, max=8184, avg=1338.86, stdev=261.03 00:15:48.450 lat (usec): min=417, max=8188, avg=1342.68, stdev=261.54 00:15:48.450 clat percentiles (usec): 00:15:48.450 | 1.00th=[ 979], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[ 1139], 00:15:48.450 | 30.00th=[ 1188], 40.00th=[ 1237], 50.00th=[ 1270], 60.00th=[ 1336], 00:15:48.450 | 70.00th=[ 1418], 80.00th=[ 1516], 90.00th=[ 1663], 95.00th=[ 1795], 00:15:48.450 | 99.00th=[ 2114], 99.50th=[ 2278], 99.90th=[ 3294], 99.95th=[ 3589], 00:15:48.450 | 99.99th=[ 6325] 00:15:48.450 bw ( KiB/s): min=148144, max=184376, per=99.36%, avg=171000.00, stdev=12761.20, samples=9 00:15:48.450 iops : min=37036, max=46094, avg=42750.00, stdev=3190.30, samples=9 00:15:48.450 lat (usec) : 500=0.01%, 750=0.08%, 1000=1.49% 00:15:48.450 lat (msec) : 2=96.70%, 4=1.70%, 10=0.03% 00:15:48.450 cpu : usr=64.36%, sys=29.52%, ctx=69, majf=0, minf=763 00:15:48.450 IO depths : 1=1.4%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.7% 00:15:48.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.450 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:48.450 issued rwts: total=0,215178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.450 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:48.450 00:15:48.450 Run status group 0 (all jobs): 00:15:48.450 WRITE: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=841MiB (881MB), run=5001-5001msec 00:15:48.450 ----------------------------------------------------- 00:15:48.450 Suppressions used: 00:15:48.450 count bytes template 00:15:48.450 1 11 /usr/src/fio/parse.c 00:15:48.450 1 8 libtcmalloc_minimal.so 00:15:48.450 1 904 libcrypto.so 00:15:48.450 ----------------------------------------------------- 00:15:48.450 00:15:48.450 ************************************ 00:15:48.450 END TEST xnvme_fio_plugin 00:15:48.450 ************************************ 00:15:48.450 00:15:48.450 real 0m13.805s 00:15:48.450 user 0m8.308s 00:15:48.450 sys 0m4.703s 00:15:48.450 12:41:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.450 12:41:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:48.450 12:41:48 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73106 00:15:48.450 12:41:48 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73106 ']' 00:15:48.450 12:41:48 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73106 00:15:48.450 Process with pid 73106 
is not found 00:15:48.450 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73106) - No such process 00:15:48.450 12:41:48 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73106 is not found' 00:15:48.450 00:15:48.450 real 3m31.061s 00:15:48.450 user 1m55.666s 00:15:48.450 sys 1m20.858s 00:15:48.450 12:41:48 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:48.450 12:41:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.450 ************************************ 00:15:48.450 END TEST nvme_xnvme 00:15:48.450 ************************************ 00:15:48.450 12:41:48 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.450 12:41:48 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:48.450 12:41:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:48.450 12:41:48 -- common/autotest_common.sh@10 -- # set +x 00:15:48.711 ************************************ 00:15:48.711 START TEST blockdev_xnvme 00:15:48.711 ************************************ 00:15:48.711 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:48.711 * Looking for test storage... 00:15:48.711 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:48.711 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:48.711 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:48.711 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:48.711 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:48.711 12:41:48 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:48.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.712 --rc genhtml_branch_coverage=1 00:15:48.712 --rc genhtml_function_coverage=1 00:15:48.712 --rc genhtml_legend=1 00:15:48.712 --rc geninfo_all_blocks=1 00:15:48.712 --rc geninfo_unexecuted_blocks=1 00:15:48.712 00:15:48.712 ' 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:48.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.712 --rc genhtml_branch_coverage=1 00:15:48.712 --rc genhtml_function_coverage=1 00:15:48.712 --rc genhtml_legend=1 00:15:48.712 --rc geninfo_all_blocks=1 00:15:48.712 --rc geninfo_unexecuted_blocks=1 00:15:48.712 00:15:48.712 ' 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:48.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.712 --rc genhtml_branch_coverage=1 00:15:48.712 --rc genhtml_function_coverage=1 00:15:48.712 --rc genhtml_legend=1 00:15:48.712 --rc geninfo_all_blocks=1 00:15:48.712 --rc geninfo_unexecuted_blocks=1 00:15:48.712 00:15:48.712 ' 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:48.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:48.712 --rc genhtml_branch_coverage=1 00:15:48.712 --rc genhtml_function_coverage=1 00:15:48.712 --rc genhtml_legend=1 00:15:48.712 --rc geninfo_all_blocks=1 00:15:48.712 --rc geninfo_unexecuted_blocks=1 00:15:48.712 00:15:48.712 ' 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73742 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73742 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73742 ']' 00:15:48.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:48.712 12:41:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:48.712 12:41:48 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:48.973 [2024-12-14 12:41:48.455358] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
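Before any xnvme bdevs exist, blockdev.sh brings up a bare target and blocks until its RPC socket answers. Condensed to its essentials, and assuming the waitforlisten and killprocess helpers from test/common/autotest_common.sh that the trace above calls:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
spdk_tgt_pid=$!
trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
# waitforlisten polls until the target listens on /var/tmp/spdk.sock
waitforlisten "$spdk_tgt_pid"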
00:15:48.973 [2024-12-14 12:41:48.455503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73742 ] 00:15:48.973 [2024-12-14 12:41:48.616663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.234 [2024-12-14 12:41:48.734308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.806 12:41:49 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:49.806 12:41:49 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:49.806 12:41:49 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:49.806 12:41:49 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:49.806 12:41:49 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:49.806 12:41:49 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:49.806 12:41:49 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:50.379 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.950 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:50.950 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:50.950 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:50.950 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:50.950 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:50.950 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:50.950 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:50.950 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:50.950 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:50.951 12:41:50 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:50.951 nvme0n1 00:15:50.951 nvme0n2 00:15:50.951 nvme0n3 00:15:50.951 nvme1n1 00:15:50.951 nvme2n1 00:15:50.951 nvme3n1 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.951 
12:41:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.951 12:41:50 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.951 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:50.952 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eead8981-55c2-4bf5-be7d-700879b0609e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eead8981-55c2-4bf5-be7d-700879b0609e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f3e4d174-5888-4b09-a01a-5589c2e7ff7d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3e4d174-5888-4b09-a01a-5589c2e7ff7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c6215d04-dfd1-49a9-8566-4ec48c2eb3e3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c6215d04-dfd1-49a9-8566-4ec48c2eb3e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"2837a206-069f-458e-b856-55ca69c414f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2837a206-069f-458e-b856-55ca69c414f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4afed10f-bfb4-4844-9355-cf5cb68f454b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4afed10f-bfb4-4844-9355-cf5cb68f454b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "491eab0e-f8f3-4e97-bce1-4b5acf6beda7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "491eab0e-f8f3-4e97-bce1-4b5acf6beda7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:50.952 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:51.212 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:51.212 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:51.212 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:51.212 12:41:50 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73742 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73742 ']' 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73742 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 73742 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73742' 00:15:51.212 killing process with pid 73742 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73742 00:15:51.212 12:41:50 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73742 00:15:53.128 12:41:52 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:53.128 12:41:52 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:53.128 12:41:52 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:53.128 12:41:52 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.129 12:41:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.129 ************************************ 00:15:53.129 START TEST bdev_hello_world 00:15:53.129 ************************************ 00:15:53.129 12:41:52 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:53.129 [2024-12-14 12:41:52.449355] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:53.129 [2024-12-14 12:41:52.449500] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74022 ] 00:15:53.129 [2024-12-14 12:41:52.605535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.129 [2024-12-14 12:41:52.723330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.390 [2024-12-14 12:41:53.125208] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:53.390 [2024-12-14 12:41:53.125268] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:53.390 [2024-12-14 12:41:53.125287] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:53.651 [2024-12-14 12:41:53.127450] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:53.651 [2024-12-14 12:41:53.128036] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:53.651 [2024-12-14 12:41:53.128081] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:53.651 [2024-12-14 12:41:53.128607] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:15:53.651 00:15:53.651 [2024-12-14 12:41:53.128794] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:54.223 ************************************ 00:15:54.223 END TEST bdev_hello_world 00:15:54.223 ************************************ 00:15:54.223 00:15:54.223 real 0m1.524s 00:15:54.223 user 0m1.134s 00:15:54.223 sys 0m0.242s 00:15:54.223 12:41:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.223 12:41:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:54.223 12:41:53 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:54.223 12:41:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.223 12:41:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.483 12:41:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:54.483 ************************************ 00:15:54.483 START TEST bdev_bounds 00:15:54.483 ************************************ 00:15:54.484 Process bdevio pid: 74064 00:15:54.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74064 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74064' 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74064 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74064 ']' 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.484 12:41:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:54.484 [2024-12-14 12:41:54.040798] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
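bdev_bounds drives the same six bdevs through bdevio instead of hello_bdev: the harness starts bdevio in wait mode, then triggers every CUnit suite over RPC, as the trace below shows. A minimal sketch under the same helpers as above:

SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
bdevio_pid=$!
waitforlisten "$bdevio_pid"
"$SPDK/test/bdev/bdevio/tests.py" perform_tests   # runs the per-bdev suites
killprocess "$bdevio_pid"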
00:15:54.484 [2024-12-14 12:41:54.040948] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74064 ] 00:15:54.484 [2024-12-14 12:41:54.205222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:54.745 [2024-12-14 12:41:54.325581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.745 [2024-12-14 12:41:54.325901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.745 [2024-12-14 12:41:54.325905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.316 12:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.316 12:41:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:55.316 12:41:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:55.316 I/O targets: 00:15:55.316 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.316 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.316 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:55.316 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:55.316 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:55.316 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:55.316 00:15:55.316 00:15:55.316 CUnit - A unit testing framework for C - Version 2.1-3 00:15:55.316 http://cunit.sourceforge.net/ 00:15:55.316 00:15:55.316 00:15:55.316 Suite: bdevio tests on: nvme3n1 00:15:55.316 Test: blockdev write read block ...passed 00:15:55.316 Test: blockdev write zeroes read block ...passed 00:15:55.316 Test: blockdev write zeroes read no split ...passed 00:15:55.316 Test: blockdev write zeroes read split ...passed 00:15:55.577 Test: blockdev write zeroes read split partial ...passed 00:15:55.577 Test: blockdev reset ...passed 00:15:55.577 Test: blockdev write read 8 blocks ...passed 00:15:55.577 Test: blockdev write read size > 128k ...passed 00:15:55.577 Test: blockdev write read invalid size ...passed 00:15:55.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.577 Test: blockdev write read max offset ...passed 00:15:55.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.577 Test: blockdev writev readv 8 blocks ...passed 00:15:55.577 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.577 Test: blockdev writev readv block ...passed 00:15:55.577 Test: blockdev writev readv size > 128k ...passed 00:15:55.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.577 Test: blockdev comparev and writev ...passed 00:15:55.577 Test: blockdev nvme passthru rw ...passed 00:15:55.577 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.577 Test: blockdev nvme admin passthru ...passed 00:15:55.577 Test: blockdev copy ...passed 00:15:55.577 Suite: bdevio tests on: nvme2n1 00:15:55.577 Test: blockdev write read block ...passed 00:15:55.577 Test: blockdev write zeroes read block ...passed 00:15:55.577 Test: blockdev write zeroes read no split ...passed 00:15:55.577 Test: blockdev write zeroes read split ...passed 00:15:55.577 Test: blockdev write zeroes read split partial ...passed 00:15:55.577 Test: blockdev reset ...passed 
00:15:55.577 Test: blockdev write read 8 blocks ...passed 00:15:55.577 Test: blockdev write read size > 128k ...passed 00:15:55.577 Test: blockdev write read invalid size ...passed 00:15:55.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.577 Test: blockdev write read max offset ...passed 00:15:55.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.577 Test: blockdev writev readv 8 blocks ...passed 00:15:55.577 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.577 Test: blockdev writev readv block ...passed 00:15:55.577 Test: blockdev writev readv size > 128k ...passed 00:15:55.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.577 Test: blockdev comparev and writev ...passed 00:15:55.577 Test: blockdev nvme passthru rw ...passed 00:15:55.577 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.577 Test: blockdev nvme admin passthru ...passed 00:15:55.577 Test: blockdev copy ...passed 00:15:55.577 Suite: bdevio tests on: nvme1n1 00:15:55.577 Test: blockdev write read block ...passed 00:15:55.577 Test: blockdev write zeroes read block ...passed 00:15:55.577 Test: blockdev write zeroes read no split ...passed 00:15:55.577 Test: blockdev write zeroes read split ...passed 00:15:55.577 Test: blockdev write zeroes read split partial ...passed 00:15:55.577 Test: blockdev reset ...passed 00:15:55.577 Test: blockdev write read 8 blocks ...passed 00:15:55.577 Test: blockdev write read size > 128k ...passed 00:15:55.577 Test: blockdev write read invalid size ...passed 00:15:55.577 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.577 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.577 Test: blockdev write read max offset ...passed 00:15:55.577 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.577 Test: blockdev writev readv 8 blocks ...passed 00:15:55.577 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.577 Test: blockdev writev readv block ...passed 00:15:55.577 Test: blockdev writev readv size > 128k ...passed 00:15:55.577 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.577 Test: blockdev comparev and writev ...passed 00:15:55.577 Test: blockdev nvme passthru rw ...passed 00:15:55.577 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.577 Test: blockdev nvme admin passthru ...passed 00:15:55.577 Test: blockdev copy ...passed 00:15:55.577 Suite: bdevio tests on: nvme0n3 00:15:55.577 Test: blockdev write read block ...passed 00:15:55.577 Test: blockdev write zeroes read block ...passed 00:15:55.577 Test: blockdev write zeroes read no split ...passed 00:15:55.577 Test: blockdev write zeroes read split ...passed 00:15:55.838 Test: blockdev write zeroes read split partial ...passed 00:15:55.838 Test: blockdev reset ...passed 00:15:55.838 Test: blockdev write read 8 blocks ...passed 00:15:55.838 Test: blockdev write read size > 128k ...passed 00:15:55.838 Test: blockdev write read invalid size ...passed 00:15:55.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.838 Test: blockdev write read max offset ...passed 00:15:55.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.838 Test: blockdev writev readv 8 blocks 
...passed 00:15:55.838 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.838 Test: blockdev writev readv block ...passed 00:15:55.838 Test: blockdev writev readv size > 128k ...passed 00:15:55.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.838 Test: blockdev comparev and writev ...passed 00:15:55.838 Test: blockdev nvme passthru rw ...passed 00:15:55.838 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.838 Test: blockdev nvme admin passthru ...passed 00:15:55.838 Test: blockdev copy ...passed 00:15:55.838 Suite: bdevio tests on: nvme0n2 00:15:55.838 Test: blockdev write read block ...passed 00:15:55.838 Test: blockdev write zeroes read block ...passed 00:15:55.838 Test: blockdev write zeroes read no split ...passed 00:15:55.838 Test: blockdev write zeroes read split ...passed 00:15:55.838 Test: blockdev write zeroes read split partial ...passed 00:15:55.838 Test: blockdev reset ...passed 00:15:55.838 Test: blockdev write read 8 blocks ...passed 00:15:55.838 Test: blockdev write read size > 128k ...passed 00:15:55.838 Test: blockdev write read invalid size ...passed 00:15:55.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.838 Test: blockdev write read max offset ...passed 00:15:55.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.838 Test: blockdev writev readv 8 blocks ...passed 00:15:55.838 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.838 Test: blockdev writev readv block ...passed 00:15:55.838 Test: blockdev writev readv size > 128k ...passed 00:15:55.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.838 Test: blockdev comparev and writev ...passed 00:15:55.838 Test: blockdev nvme passthru rw ...passed 00:15:55.838 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.838 Test: blockdev nvme admin passthru ...passed 00:15:55.838 Test: blockdev copy ...passed 00:15:55.838 Suite: bdevio tests on: nvme0n1 00:15:55.838 Test: blockdev write read block ...passed 00:15:55.838 Test: blockdev write zeroes read block ...passed 00:15:55.838 Test: blockdev write zeroes read no split ...passed 00:15:55.838 Test: blockdev write zeroes read split ...passed 00:15:55.838 Test: blockdev write zeroes read split partial ...passed 00:15:55.838 Test: blockdev reset ...passed 00:15:55.838 Test: blockdev write read 8 blocks ...passed 00:15:55.838 Test: blockdev write read size > 128k ...passed 00:15:55.838 Test: blockdev write read invalid size ...passed 00:15:55.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:55.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:55.838 Test: blockdev write read max offset ...passed 00:15:55.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:55.838 Test: blockdev writev readv 8 blocks ...passed 00:15:55.838 Test: blockdev writev readv 30 x 1block ...passed 00:15:55.838 Test: blockdev writev readv block ...passed 00:15:55.838 Test: blockdev writev readv size > 128k ...passed 00:15:55.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:55.838 Test: blockdev comparev and writev ...passed 00:15:55.838 Test: blockdev nvme passthru rw ...passed 00:15:55.838 Test: blockdev nvme passthru vendor specific ...passed 00:15:55.838 Test: blockdev nvme admin passthru ...passed 00:15:55.838 Test: blockdev copy ...passed 
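Before the run summary lands, it is worth noting how the six suites above were driven: bdev_bounds launches bdevio in wait mode against the generated bdev config, polls its RPC socket, then triggers the whole CUnit run over RPC. A minimal sketch of that launch-poll-drive pattern, reusing the paths and flags visible in this trace (the rpc_get_methods poll is a stand-in for waitforlisten's actual check):

    SPDK=/home/vagrant/spdk_repo/spdk
    # flags copied from the trace above; -w makes bdevio wait for the perform_tests RPC
    $SPDK/test/bdev/bdevio/bdevio -w -s 0 --json $SPDK/test/bdev/bdev.json '' &
    bdevio_pid=$!
    # stand-in for waitforlisten: poll the app's RPC socket until it answers
    until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; do
        sleep 0.1
    done
    $SPDK/test/bdev/bdevio/tests.py perform_tests    # drives all 6 suites / 138 tests
    wait $bdevio_pid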
00:15:55.838 00:15:55.838 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.838 suites 6 6 n/a 0 0 00:15:55.838 tests 138 138 138 0 0 00:15:55.838 asserts 780 780 780 0 n/a 00:15:55.838 00:15:55.838 Elapsed time = 1.251 seconds 00:15:55.838 0 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74064 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74064 ']' 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74064 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74064 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74064' 00:15:55.838 killing process with pid 74064 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74064 00:15:55.838 12:41:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74064 00:15:56.781 12:41:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:56.781 00:15:56.781 real 0m2.494s 00:15:56.781 user 0m6.070s 00:15:56.781 sys 0m0.387s 00:15:56.781 12:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.781 ************************************ 00:15:56.781 END TEST bdev_bounds 00:15:56.781 ************************************ 00:15:56.781 12:41:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:57.043 12:41:56 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:57.043 12:41:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:57.043 12:41:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.043 12:41:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:57.043 ************************************ 00:15:57.043 START TEST bdev_nbd 00:15:57.043 ************************************ 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
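Each of the six bdevs is about to be exported as a kernel /dev/nbdX node through the spdk-nbd.sock RPC server started here, and the traces that follow sanity-check every node with the waitfornbd + one-block O_DIRECT dd pattern. Condensed into a sketch for a single device (socket path, bdev name, retry bound, and dd flags as in this run; the sleep interval is assumed, and /tmp/nbdtest stands in for the repo-local scratch file):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0          # export the bdev as a kernel block device
    for ((i = 1; i <= 20; i++)); do                # waitfornbd: up to 20 retries
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1                                  # assumed pacing between retries
    done
    # one direct-I/O block read proves the kernel <-> SPDK nbd path is live
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct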
00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74122 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74122 /var/tmp/spdk-nbd.sock 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74122 ']' 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:57.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.043 12:41:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:57.043 [2024-12-14 12:41:56.619757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:57.043 [2024-12-14 12:41:56.620110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.305 [2024-12-14 12:41:56.785380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.305 [2024-12-14 12:41:56.938448] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.878 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.140 
1+0 records in 00:15:58.140 1+0 records out 00:15:58.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000987832 s, 4.1 MB/s 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.140 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.402 1+0 records in 00:15:58.402 1+0 records out 00:15:58.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846653 s, 4.8 MB/s 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.402 12:41:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:58.663 12:41:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.663 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.664 1+0 records in 00:15:58.664 1+0 records out 00:15:58.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000962901 s, 4.3 MB/s 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.664 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:58.925 1+0 records in 00:15:58.925 1+0 records out 00:15:58.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00371866 s, 1.1 MB/s 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:58.925 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.186 1+0 records in 00:15:59.186 1+0 records out 00:15:59.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123468 s, 3.3 MB/s 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.186 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:59.446 12:41:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:59.446 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.447 12:41:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.447 1+0 records in 00:15:59.447 1+0 records out 00:15:59.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011859 s, 3.5 MB/s 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:59.447 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd0", 00:15:59.708 "bdev_name": "nvme0n1" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd1", 00:15:59.708 "bdev_name": "nvme0n2" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd2", 00:15:59.708 "bdev_name": "nvme0n3" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd3", 00:15:59.708 "bdev_name": "nvme1n1" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd4", 00:15:59.708 "bdev_name": "nvme2n1" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd5", 00:15:59.708 "bdev_name": "nvme3n1" 00:15:59.708 } 00:15:59.708 ]' 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd0", 00:15:59.708 "bdev_name": "nvme0n1" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd1", 00:15:59.708 "bdev_name": "nvme0n2" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd2", 00:15:59.708 "bdev_name": "nvme0n3" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd3", 00:15:59.708 "bdev_name": "nvme1n1" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd4", 00:15:59.708 "bdev_name": "nvme2n1" 00:15:59.708 }, 00:15:59.708 { 00:15:59.708 "nbd_device": "/dev/nbd5", 00:15:59.708 "bdev_name": "nvme3n1" 00:15:59.708 } 00:15:59.708 ]' 00:15:59.708 12:41:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.708 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.969 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.231 12:41:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.492 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.752 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.014 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.273 12:42:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:01.532 /dev/nbd0 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.532 1+0 records in 00:16:01.532 1+0 records out 00:16:01.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587186 s, 7.0 MB/s 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.532 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:01.790 /dev/nbd1 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.790 1+0 records in 00:16:01.790 1+0 records out 00:16:01.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873776 s, 4.7 MB/s 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.790 12:42:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:01.790 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:02.050 /dev/nbd10 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.050 1+0 records in 00:16:02.050 1+0 records out 00:16:02.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731632 s, 5.6 MB/s 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.050 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:02.311 /dev/nbd11 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.311 12:42:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.311 1+0 records in 00:16:02.311 1+0 records out 00:16:02.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00137666 s, 3.0 MB/s 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.311 12:42:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:02.311 /dev/nbd12 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.571 1+0 records in 00:16:02.571 1+0 records out 00:16:02.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000945397 s, 4.3 MB/s 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.571 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:02.571 /dev/nbd13 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:02.832 1+0 records in 00:16:02.832 1+0 records out 00:16:02.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112072 s, 3.7 MB/s 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd0", 00:16:02.832 "bdev_name": "nvme0n1" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd1", 00:16:02.832 "bdev_name": "nvme0n2" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd10", 00:16:02.832 "bdev_name": "nvme0n3" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd11", 00:16:02.832 "bdev_name": "nvme1n1" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd12", 00:16:02.832 "bdev_name": "nvme2n1" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd13", 00:16:02.832 "bdev_name": "nvme3n1" 00:16:02.832 } 00:16:02.832 ]' 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:02.832 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd0", 00:16:02.832 "bdev_name": "nvme0n1" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd1", 00:16:02.832 "bdev_name": "nvme0n2" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd10", 00:16:02.832 "bdev_name": "nvme0n3" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd11", 00:16:02.832 "bdev_name": "nvme1n1" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd12", 00:16:02.832 "bdev_name": "nvme2n1" 00:16:02.832 }, 00:16:02.832 { 00:16:02.832 "nbd_device": "/dev/nbd13", 00:16:02.832 "bdev_name": "nvme3n1" 00:16:02.832 } 00:16:02.832 ]' 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:03.091 /dev/nbd1 00:16:03.091 /dev/nbd10 00:16:03.091 /dev/nbd11 00:16:03.091 /dev/nbd12 00:16:03.091 /dev/nbd13' 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:03.091 /dev/nbd1 00:16:03.091 /dev/nbd10 00:16:03.091 /dev/nbd11 00:16:03.091 /dev/nbd12 00:16:03.091 /dev/nbd13' 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:03.091 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:03.092 256+0 records in 00:16:03.092 256+0 records out 00:16:03.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661238 s, 159 MB/s 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.092 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:03.351 256+0 records in 00:16:03.351 256+0 records out 00:16:03.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246204 s, 4.3 MB/s 00:16:03.351 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.351 12:42:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:03.612 256+0 records in 00:16:03.612 256+0 records out 00:16:03.612 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.24539 s, 
4.3 MB/s 00:16:03.612 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.612 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:03.873 256+0 records in 00:16:03.873 256+0 records out 00:16:03.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247007 s, 4.2 MB/s 00:16:03.873 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:03.873 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:04.133 256+0 records in 00:16:04.133 256+0 records out 00:16:04.133 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.317187 s, 3.3 MB/s 00:16:04.133 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.133 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:04.394 256+0 records in 00:16:04.394 256+0 records out 00:16:04.394 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.250806 s, 4.2 MB/s 00:16:04.394 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:04.394 12:42:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:04.655 256+0 records in 00:16:04.655 256+0 records out 00:16:04.655 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.242465 s, 4.3 MB/s 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:04.656 
12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.656 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.914 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.915 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.173 12:42:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.431 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:05.688 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:05.688 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.689 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.949 
12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.949 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:06.208 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:06.468 malloc_lvol_verify 00:16:06.468 12:42:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:06.468 babcc527-e478-4860-9314-5fdd2b570ceb 00:16:06.468 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:06.729 d3c795e9-89e1-4cdb-9d8c-1c186edb4124 00:16:06.729 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:06.990 /dev/nbd0 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:06.990 mke2fs 1.47.0 (5-Feb-2023) 00:16:06.990 Discarding device blocks: 0/4096 
done 00:16:06.990 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:06.990 00:16:06.990 Allocating group tables: 0/1 done 00:16:06.990 Writing inode tables: 0/1 done 00:16:06.990 Creating journal (1024 blocks): done 00:16:06.990 Writing superblocks and filesystem accounting information: 0/1 done 00:16:06.990 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.990 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74122 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74122 ']' 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74122 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74122 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:07.251 killing process with pid 74122 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74122' 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74122 00:16:07.251 12:42:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74122 00:16:08.192 12:42:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:08.192 00:16:08.192 real 0m11.031s 00:16:08.192 user 0m14.544s 00:16:08.192 sys 0m3.945s 00:16:08.192 12:42:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.192 12:42:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:08.192 ************************************ 00:16:08.192 END TEST bdev_nbd 00:16:08.192 ************************************ 
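
The bdev_nbd test above exercises SPDK's NBD export path end to end: each xNVMe bdev is attached to a kernel /dev/nbdN node over the RPC socket, a random 1 MiB pattern is written through dd and read back with cmp, the nodes are detached, and finally a logical volume is exported and formatted with mkfs.ext4 to confirm the device behaves like an ordinary block device. A minimal standalone sketch of the same flow in bash, assuming a running spdk_tgt with its RPC socket at /var/tmp/spdk-nbd.sock and an existing bdev named nvme0n1 (socket path and bdev name are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock

  # Attach the bdev to a kernel NBD node
  $rpc -s $sock nbd_start_disk nvme0n1 /dev/nbd0

  # Poll /proc/partitions until the kernel registers the device
  # (waitfornbd makes the same check, up to 20 attempts)
  for i in $(seq 1 20); do
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1
  done

  # Write a known random pattern with O_DIRECT and verify it byte-for-byte
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

  # Detach the device again
  $rpc -s $sock nbd_stop_disk /dev/nbd0

nbd_get_disks returns the current bdev-to-device mapping as JSON, which the test pipes through jq -r '.[] | .nbd_device' to count attached nodes before and after the run, as seen in the trace above.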
00:16:08.192 12:42:07 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:08.192 12:42:07 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:08.192 12:42:07 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:08.192 12:42:07 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:08.192 12:42:07 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:08.192 12:42:07 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.192 12:42:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:08.192 ************************************ 00:16:08.192 START TEST bdev_fio 00:16:08.192 ************************************ 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:08.192 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.192 12:42:07 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:08.192 ************************************ 00:16:08.193 START TEST bdev_fio_rw_verify 00:16:08.193 ************************************ 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:08.193 12:42:07 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:08.193 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.193 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.193 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.193 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.193 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.193 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:08.193 fio-3.35 00:16:08.193 Starting 6 threads 00:16:20.492 00:16:20.492 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74535: Sat Dec 14 12:42:18 2024 00:16:20.492 read: IOPS=18.2k, BW=71.0MiB/s (74.4MB/s)(710MiB/10002msec) 00:16:20.492 slat (usec): min=2, max=3590, avg= 6.12, stdev=19.78 00:16:20.492 clat (usec): min=73, max=7491, avg=1027.00, stdev=709.94 00:16:20.492 lat (usec): min=77, max=7513, avg=1033.12, stdev=710.79 
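
The fio suite drives I/O through SPDK's external fio ioengine rather than the kernel block layer: fio_config_gen writes a verify-mode bdev.fio, one [job_X] section is appended per bdev, and fio is launched with the spdk_bdev plugin while libasan is force-loaded through LD_PRELOAD so the sanitizer runtime resolves before the plugin (the ldd/grep steps above locate the right libasan for this build; the version check also appends serialize_overlap=1 once fio 3.x is detected). A condensed sketch of the generated job file and invocation, assuming the plugin at build/fio/spdk_bdev and the bdev.json produced earlier (paths mirror the trace and are illustrative):

  fio_cfg=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

  # Global option required by the bdev plugin on fio 3.x,
  # then one job section per bdev named in bdev.json
  echo serialize_overlap=1 >> $fio_cfg
  echo '[job_nvme0n1]'     >> $fio_cfg
  echo filename=nvme0n1    >> $fio_cfg

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      $fio_cfg --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Each job then appears in the summary as randwrite with iodepth=8, and the groupid=0 block aggregates all six threads.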
00:16:20.492 clat percentiles (usec): 00:16:20.492 | 50.000th=[ 865], 99.000th=[ 3294], 99.900th=[ 4752], 99.990th=[ 6456], 00:16:20.492 | 99.999th=[ 7504] 00:16:20.492 write: IOPS=18.4k, BW=71.9MiB/s (75.4MB/s)(719MiB/10002msec); 0 zone resets 00:16:20.492 slat (usec): min=10, max=3752, avg=37.24, stdev=121.65 00:16:20.492 clat (usec): min=67, max=8184, avg=1294.24, stdev=790.23 00:16:20.492 lat (usec): min=81, max=8215, avg=1331.48, stdev=802.64 00:16:20.492 clat percentiles (usec): 00:16:20.492 | 50.000th=[ 1139], 99.000th=[ 3785], 99.900th=[ 5276], 99.990th=[ 6325], 00:16:20.492 | 99.999th=[ 8160] 00:16:20.492 bw ( KiB/s): min=49149, max=111736, per=100.00%, avg=74826.11, stdev=3215.03, samples=114 00:16:20.492 iops : min=12285, max=27934, avg=18705.32, stdev=803.84, samples=114 00:16:20.492 lat (usec) : 100=0.03%, 250=5.06%, 500=14.01%, 750=16.93%, 1000=13.78% 00:16:20.492 lat (msec) : 2=36.96%, 4=12.74%, 10=0.50% 00:16:20.492 cpu : usr=40.10%, sys=34.14%, ctx=6353, majf=0, minf=17188 00:16:20.492 IO depths : 1=11.2%, 2=23.7%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.492 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.492 issued rwts: total=181724,184126,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.492 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:20.492 00:16:20.492 Run status group 0 (all jobs): 00:16:20.492 READ: bw=71.0MiB/s (74.4MB/s), 71.0MiB/s-71.0MiB/s (74.4MB/s-74.4MB/s), io=710MiB (744MB), run=10002-10002msec 00:16:20.492 WRITE: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=719MiB (754MB), run=10002-10002msec 00:16:20.492 ----------------------------------------------------- 00:16:20.492 Suppressions used: 00:16:20.492 count bytes template 00:16:20.492 6 48 /usr/src/fio/parse.c 00:16:20.492 2280 218880 /usr/src/fio/iolog.c 00:16:20.492 1 8 libtcmalloc_minimal.so 00:16:20.492 1 904 libcrypto.so 00:16:20.492 ----------------------------------------------------- 00:16:20.492 00:16:20.492 00:16:20.492 real 0m11.899s 00:16:20.492 user 0m25.551s 00:16:20.492 sys 0m20.784s 00:16:20.492 ************************************ 00:16:20.492 END TEST bdev_fio_rw_verify 00:16:20.492 ************************************ 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "eead8981-55c2-4bf5-be7d-700879b0609e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eead8981-55c2-4bf5-be7d-700879b0609e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f3e4d174-5888-4b09-a01a-5589c2e7ff7d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f3e4d174-5888-4b09-a01a-5589c2e7ff7d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "c6215d04-dfd1-49a9-8566-4ec48c2eb3e3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c6215d04-dfd1-49a9-8566-4ec48c2eb3e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2837a206-069f-458e-b856-55ca69c414f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2837a206-069f-458e-b856-55ca69c414f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "4afed10f-bfb4-4844-9355-cf5cb68f454b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "4afed10f-bfb4-4844-9355-cf5cb68f454b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "491eab0e-f8f3-4e97-bce1-4b5acf6beda7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "491eab0e-f8f3-4e97-bce1-4b5acf6beda7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:20.492 /home/vagrant/spdk_repo/spdk 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:20.492 12:42:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:20.492 00:16:20.492 real 0m12.068s 00:16:20.493 user 
0m25.628s 00:16:20.493 sys 0m20.851s 00:16:20.493 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.493 ************************************ 00:16:20.493 END TEST bdev_fio 00:16:20.493 ************************************ 00:16:20.493 12:42:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:20.493 12:42:19 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:20.493 12:42:19 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:20.493 12:42:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:20.493 12:42:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.493 12:42:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.493 ************************************ 00:16:20.493 START TEST bdev_verify 00:16:20.493 ************************************ 00:16:20.493 12:42:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:20.493 [2024-12-14 12:42:19.837607] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:20.493 [2024-12-14 12:42:19.837756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74704 ] 00:16:20.493 [2024-12-14 12:42:19.994445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:20.493 [2024-12-14 12:42:20.114116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.493 [2024-12-14 12:42:20.114133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.066 Running I/O for 5 seconds... 
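
bdev_verify then runs the bdevperf example app in verify mode against the same JSON config: 4 KiB I/Os at queue depth 128 are written and read back with a pattern check for five seconds across the two cores in mask 0x3. The invocation being timed here, taken from the run above (-C spreads each bdev's job across every core in the mask, which is why each bdev reports once per core in the results that follow):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3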
00:16:23.396 23840.00 IOPS, 93.12 MiB/s [2024-12-14T12:42:24.078Z] 22768.00 IOPS, 88.94 MiB/s [2024-12-14T12:42:25.022Z] 22442.67 IOPS, 87.67 MiB/s [2024-12-14T12:42:25.969Z] 22600.00 IOPS, 88.28 MiB/s [2024-12-14T12:42:25.969Z] 22566.40 IOPS, 88.15 MiB/s 00:16:26.232 Latency(us) 00:16:26.232 [2024-12-14T12:42:25.969Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:26.232 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x0 length 0x80000 00:16:26.232 nvme0n1 : 5.04 1725.58 6.74 0.00 0.00 74041.76 9427.10 67754.14 00:16:26.232 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x80000 length 0x80000 00:16:26.232 nvme0n1 : 5.05 1774.61 6.93 0.00 0.00 71994.66 9981.64 74610.22 00:16:26.232 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x0 length 0x80000 00:16:26.232 nvme0n2 : 5.08 1712.40 6.69 0.00 0.00 74466.26 8922.98 75013.51 00:16:26.232 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x80000 length 0x80000 00:16:26.232 nvme0n2 : 5.07 1766.22 6.90 0.00 0.00 72186.33 8015.56 74206.92 00:16:26.232 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x0 length 0x80000 00:16:26.232 nvme0n3 : 5.08 1711.88 6.69 0.00 0.00 74350.86 8267.62 73400.32 00:16:26.232 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x80000 length 0x80000 00:16:26.232 nvme0n3 : 5.06 1772.09 6.92 0.00 0.00 71801.82 11342.77 84289.38 00:16:26.232 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x0 length 0xbd0bd 00:16:26.232 nvme1n1 : 5.08 2431.64 9.50 0.00 0.00 52144.44 5646.18 69367.34 00:16:26.232 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:16:26.232 nvme1n1 : 5.08 2419.92 9.45 0.00 0.00 52456.71 5444.53 64527.75 00:16:26.232 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x0 length 0xa0000 00:16:26.232 nvme2n1 : 5.08 1738.12 6.79 0.00 0.00 72961.04 7410.61 70980.53 00:16:26.232 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0xa0000 length 0xa0000 00:16:26.232 nvme2n1 : 5.06 1822.07 7.12 0.00 0.00 69360.09 10233.70 68157.44 00:16:26.232 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x0 length 0x20000 00:16:26.232 nvme3n1 : 5.07 1715.73 6.70 0.00 0.00 73679.67 3982.57 73803.62 00:16:26.232 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:26.232 Verification LBA range: start 0x20000 length 0x20000 00:16:26.232 nvme3n1 : 5.09 1787.04 6.98 0.00 0.00 70603.38 4486.70 63317.86 00:16:26.232 [2024-12-14T12:42:25.969Z] =================================================================================================================== 00:16:26.232 [2024-12-14T12:42:25.969Z] Total : 22377.31 87.41 0.00 0.00 68124.40 3982.57 84289.38 00:16:26.804 00:16:26.804 real 0m6.724s 00:16:26.804 user 0m10.781s 00:16:26.804 sys 0m1.522s 00:16:26.804 12:42:26 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.804 12:42:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:26.804 ************************************ 00:16:26.804 END TEST bdev_verify 00:16:26.804 ************************************ 00:16:27.066 12:42:26 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:27.066 12:42:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:27.066 12:42:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.066 12:42:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:27.066 ************************************ 00:16:27.066 START TEST bdev_verify_big_io 00:16:27.066 ************************************ 00:16:27.066 12:42:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:27.066 [2024-12-14 12:42:26.647405] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:27.066 [2024-12-14 12:42:26.647563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74803 ] 00:16:27.327 [2024-12-14 12:42:26.817091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:27.327 [2024-12-14 12:42:26.934669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.327 [2024-12-14 12:42:26.934782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.899 Running I/O for 5 seconds... 
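
bdev_verify_big_io repeats the same verify pass with 64 KiB I/O instead of 4 KiB, exercising multi-block transfers rather than single-block ones; only the I/O size flag changes:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3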
00:16:33.747 896.00 IOPS, 56.00 MiB/s [2024-12-14T12:42:33.484Z] 2693.50 IOPS, 168.34 MiB/s [2024-12-14T12:42:33.484Z] 3024.33 IOPS, 189.02 MiB/s 00:16:33.747 Latency(us) 00:16:33.747 [2024-12-14T12:42:33.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.747 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x0 length 0x8000 00:16:33.747 nvme0n1 : 5.84 109.62 6.85 0.00 0.00 1148413.16 205682.22 1058255.16 00:16:33.747 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x8000 length 0x8000 00:16:33.747 nvme0n1 : 5.76 127.88 7.99 0.00 0.00 943378.87 6906.49 1284102.30 00:16:33.747 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x0 length 0x8000 00:16:33.747 nvme0n2 : 5.85 84.82 5.30 0.00 0.00 1429533.68 77030.01 3032804.43 00:16:33.747 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x8000 length 0x8000 00:16:33.747 nvme0n2 : 5.81 132.18 8.26 0.00 0.00 906214.79 163739.18 1187310.67 00:16:33.747 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x0 length 0x8000 00:16:33.747 nvme0n3 : 5.85 120.35 7.52 0.00 0.00 975281.37 8570.09 916294.10 00:16:33.747 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x8000 length 0x8000 00:16:33.747 nvme0n3 : 5.86 128.25 8.02 0.00 0.00 912256.29 104051.00 1271196.75 00:16:33.747 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.747 Verification LBA range: start 0x0 length 0xbd0b 00:16:33.748 nvme1n1 : 5.91 127.24 7.95 0.00 0.00 887220.15 72997.02 1251838.42 00:16:33.748 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.748 Verification LBA range: start 0xbd0b length 0xbd0b 00:16:33.748 nvme1n1 : 5.87 171.78 10.74 0.00 0.00 656145.89 48395.82 1522854.99 00:16:33.748 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.748 Verification LBA range: start 0x0 length 0xa000 00:16:33.748 nvme2n1 : 5.91 100.13 6.26 0.00 0.00 1101299.78 64931.05 1129235.69 00:16:33.748 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.748 Verification LBA range: start 0xa000 length 0xa000 00:16:33.748 nvme2n1 : 5.92 126.72 7.92 0.00 0.00 867018.29 51017.26 2568204.60 00:16:33.748 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:33.748 Verification LBA range: start 0x0 length 0x2000 00:16:33.748 nvme3n1 : 5.92 143.29 8.96 0.00 0.00 746141.05 1354.83 1019538.51 00:16:33.748 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:33.748 Verification LBA range: start 0x2000 length 0x2000 00:16:33.748 nvme3n1 : 5.96 206.02 12.88 0.00 0.00 518094.01 523.03 600108.11 00:16:33.748 [2024-12-14T12:42:33.485Z] =================================================================================================================== 00:16:33.748 [2024-12-14T12:42:33.485Z] Total : 1578.28 98.64 0.00 0.00 874171.74 523.03 3032804.43 00:16:34.688 00:16:34.688 real 0m7.582s 00:16:34.688 user 0m13.856s 00:16:34.688 sys 0m0.471s 00:16:34.688 12:42:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.688 
************************************ 00:16:34.688 END TEST bdev_verify_big_io 00:16:34.688 ************************************ 00:16:34.688 12:42:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 12:42:34 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.688 12:42:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:34.688 12:42:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.688 12:42:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:34.688 ************************************ 00:16:34.689 START TEST bdev_write_zeroes 00:16:34.689 ************************************ 00:16:34.689 12:42:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:34.689 [2024-12-14 12:42:34.277230] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:34.689 [2024-12-14 12:42:34.277368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74907 ] 00:16:34.948 [2024-12-14 12:42:34.437658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.948 [2024-12-14 12:42:34.521809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.207 Running I/O for 1 seconds... 
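
bdev_write_zeroes switches the workload to bdevperf's write_zeroes mode for one second on a single core, confirming each bdev services the write_zeroes I/O type it advertises ("write_zeroes": true in the JSON dump earlier). The variant being run, mirroring the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1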
00:16:36.152 77856.00 IOPS, 304.12 MiB/s 00:16:36.152 Latency(us) 00:16:36.152 [2024-12-14T12:42:35.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.152 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.152 nvme0n1 : 1.02 11762.26 45.95 0.00 0.00 10873.08 4814.38 21677.29 00:16:36.152 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.152 nvme0n2 : 1.02 11749.16 45.90 0.00 0.00 10878.99 4738.76 22584.71 00:16:36.152 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.152 nvme0n3 : 1.01 11728.32 45.81 0.00 0.00 10891.24 4713.55 24298.73 00:16:36.152 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.152 nvme1n1 : 1.02 18833.39 73.57 0.00 0.00 6776.96 3428.04 18955.03 00:16:36.152 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.152 nvme2n1 : 1.02 11663.74 45.56 0.00 0.00 10902.13 4814.38 23189.66 00:16:36.152 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:36.152 nvme3n1 : 1.02 11650.20 45.51 0.00 0.00 10908.52 4814.38 23088.84 00:16:36.152 [2024-12-14T12:42:35.889Z] =================================================================================================================== 00:16:36.152 [2024-12-14T12:42:35.889Z] Total : 77387.07 302.29 0.00 0.00 9890.62 3428.04 24298.73 00:16:37.096 00:16:37.096 real 0m2.447s 00:16:37.096 user 0m1.755s 00:16:37.096 sys 0m0.540s 00:16:37.096 12:42:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.096 ************************************ 00:16:37.096 END TEST bdev_write_zeroes 00:16:37.096 ************************************ 00:16:37.096 12:42:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:37.096 12:42:36 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.096 12:42:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:37.096 12:42:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.096 12:42:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.096 ************************************ 00:16:37.096 START TEST bdev_json_nonenclosed 00:16:37.096 ************************************ 00:16:37.096 12:42:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.096 [2024-12-14 12:42:36.805559] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:37.096 [2024-12-14 12:42:36.805700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74959 ] 00:16:37.358 [2024-12-14 12:42:36.964883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.358 [2024-12-14 12:42:37.084019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.358 [2024-12-14 12:42:37.084137] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:37.358 [2024-12-14 12:42:37.084162] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:37.358 [2024-12-14 12:42:37.084173] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:37.619 00:16:37.619 real 0m0.549s 00:16:37.619 user 0m0.324s 00:16:37.619 sys 0m0.119s 00:16:37.619 12:42:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.619 ************************************ 00:16:37.619 END TEST bdev_json_nonenclosed 00:16:37.619 ************************************ 00:16:37.619 12:42:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:37.619 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.619 12:42:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:37.619 12:42:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.619 12:42:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.619 ************************************ 00:16:37.619 START TEST bdev_json_nonarray 00:16:37.619 ************************************ 00:16:37.619 12:42:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.880 [2024-12-14 12:42:37.422750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:37.880 [2024-12-14 12:42:37.422881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74983 ] 00:16:37.880 [2024-12-14 12:42:37.587585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.139 [2024-12-14 12:42:37.709050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.139 [2024-12-14 12:42:37.709174] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
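
bdev_json_nonenclosed and bdev_json_nonarray are negative tests: bdevperf is fed deliberately malformed configuration files and must reject them through spdk_app_stop with a clean error rather than crash, which is exactly what the json_config.c errors above record. Roughly what the two fixtures look like, reconstructed from those error messages (the real files live in test/bdev/ and may differ in detail):

  # nonenclosed.json: top-level content not enclosed in {}
  printf '%s\n' '"subsystems": []' > nonenclosed.json

  # nonarray.json: "subsystems" present but not an array
  printf '%s\n' '{ "subsystems": "not-an-array" }' > nonarray.json

  # Both invocations are expected to fail with a non-zero exit
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json nonenclosed.json \
      -q 128 -o 4096 -w write_zeroes -t 1 && echo 'unexpected success'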
00:16:38.139 [2024-12-14 12:42:37.709194] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:38.139 [2024-12-14 12:42:37.709204] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:38.400 00:16:38.400 real 0m0.556s 00:16:38.400 user 0m0.334s 00:16:38.400 sys 0m0.115s 00:16:38.400 12:42:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:38.400 ************************************ 00:16:38.400 END TEST bdev_json_nonarray 00:16:38.400 ************************************ 00:16:38.400 12:42:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:38.400 12:42:37 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:38.973 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.272 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.272 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.272 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.272 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.272 00:16:51.272 real 1m2.126s 00:16:51.272 user 1m19.606s 00:16:51.272 sys 0m52.522s 00:16:51.272 12:42:50 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.272 ************************************ 00:16:51.272 END TEST blockdev_xnvme 00:16:51.272 ************************************ 00:16:51.273 12:42:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.273 12:42:50 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:51.273 12:42:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.273 12:42:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.273 12:42:50 -- common/autotest_common.sh@10 -- # set +x 00:16:51.273 ************************************ 00:16:51.273 START TEST ublk 00:16:51.273 ************************************ 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:51.273 * Looking for test storage... 
00:16:51.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.273 12:42:50 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.273 12:42:50 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.273 12:42:50 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.273 12:42:50 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.273 12:42:50 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.273 12:42:50 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:51.273 12:42:50 ublk -- scripts/common.sh@345 -- # : 1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.273 12:42:50 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:51.273 12:42:50 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@353 -- # local d=1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.273 12:42:50 ublk -- scripts/common.sh@355 -- # echo 1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.273 12:42:50 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@353 -- # local d=2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.273 12:42:50 ublk -- scripts/common.sh@355 -- # echo 2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.273 12:42:50 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.273 12:42:50 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.273 12:42:50 ublk -- scripts/common.sh@368 -- # return 0 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.273 --rc genhtml_branch_coverage=1 00:16:51.273 --rc genhtml_function_coverage=1 00:16:51.273 --rc genhtml_legend=1 00:16:51.273 --rc geninfo_all_blocks=1 00:16:51.273 --rc geninfo_unexecuted_blocks=1 00:16:51.273 00:16:51.273 ' 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.273 --rc genhtml_branch_coverage=1 00:16:51.273 --rc genhtml_function_coverage=1 00:16:51.273 --rc genhtml_legend=1 00:16:51.273 --rc geninfo_all_blocks=1 00:16:51.273 --rc geninfo_unexecuted_blocks=1 00:16:51.273 00:16:51.273 ' 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.273 --rc genhtml_branch_coverage=1 00:16:51.273 --rc 
genhtml_function_coverage=1 00:16:51.273 --rc genhtml_legend=1 00:16:51.273 --rc geninfo_all_blocks=1 00:16:51.273 --rc geninfo_unexecuted_blocks=1 00:16:51.273 00:16:51.273 ' 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:51.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.273 --rc genhtml_branch_coverage=1 00:16:51.273 --rc genhtml_function_coverage=1 00:16:51.273 --rc genhtml_legend=1 00:16:51.273 --rc geninfo_all_blocks=1 00:16:51.273 --rc geninfo_unexecuted_blocks=1 00:16:51.273 00:16:51.273 ' 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:51.273 12:42:50 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:51.273 12:42:50 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:51.273 12:42:50 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:51.273 12:42:50 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:51.273 12:42:50 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:51.273 12:42:50 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:51.273 12:42:50 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:51.273 12:42:50 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:51.273 12:42:50 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.273 12:42:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.273 ************************************ 00:16:51.273 START TEST test_save_ublk_config 00:16:51.273 ************************************ 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75282 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75282 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75282 ']' 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
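test_save_ublk_config is a round trip: bring up spdk_tgt, create a ublk target and disk, snapshot the live configuration with save_config, then prove a fresh target boots from that snapshot. Condensed into the harness's own helpers (a sketch; the malloc and queue parameters mirror the trace that follows):

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    rpc_cmd ublk_create_target                     # target poller pinned by cpumask "1"
    rpc_cmd ublk_start_disk malloc0 0 -q 1 -d 128  # /dev/ublkb0, 1 queue, depth 128
    config=$(rpc_cmd save_config)                  # the full subsystem JSON dumped below
    killprocess "$tgtpid"
    # Feed the snapshot back in; the <(...) substitution is what appears
    # below as '-c /dev/fd/63' on the second spdk_tgt command line.
    "$SPDK_REPO/build/bin/spdk_tgt" -L ublk -c <(echo "$config")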
00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.273 12:42:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:51.273 [2024-12-14 12:42:50.674464] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:51.273 [2024-12-14 12:42:50.675112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75282 ] 00:16:51.273 [2024-12-14 12:42:50.835626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.273 [2024-12-14 12:42:50.980176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.215 [2024-12-14 12:42:51.815093] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:52.215 [2024-12-14 12:42:51.816032] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:52.215 malloc0 00:16:52.215 [2024-12-14 12:42:51.895237] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:52.215 [2024-12-14 12:42:51.895352] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:52.215 [2024-12-14 12:42:51.895364] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:52.215 [2024-12-14 12:42:51.895373] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:52.215 [2024-12-14 12:42:51.904227] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:52.215 [2024-12-14 12:42:51.904265] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:52.215 [2024-12-14 12:42:51.911099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:52.215 [2024-12-14 12:42:51.911245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:52.215 [2024-12-14 12:42:51.928088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:52.215 0 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.215 12:42:51 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.784 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:52.784 12:42:52 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:52.784 "subsystems": [ 00:16:52.784 { 00:16:52.784 "subsystem": "fsdev", 00:16:52.784 "config": [ 00:16:52.784 { 00:16:52.784 "method": "fsdev_set_opts", 00:16:52.784 "params": { 00:16:52.784 "fsdev_io_pool_size": 65535, 00:16:52.784 "fsdev_io_cache_size": 256 00:16:52.784 } 00:16:52.784 } 00:16:52.784 ] 00:16:52.784 }, 00:16:52.784 { 00:16:52.784 "subsystem": "keyring", 00:16:52.784 "config": [] 00:16:52.784 }, 00:16:52.784 { 00:16:52.784 "subsystem": "iobuf", 00:16:52.784 "config": [ 00:16:52.784 { 00:16:52.784 "method": "iobuf_set_options", 00:16:52.784 "params": { 00:16:52.784 "small_pool_count": 8192, 00:16:52.784 "large_pool_count": 1024, 00:16:52.784 "small_bufsize": 8192, 00:16:52.784 "large_bufsize": 135168, 00:16:52.784 "enable_numa": false 00:16:52.784 } 00:16:52.784 } 00:16:52.784 ] 00:16:52.784 }, 00:16:52.784 { 00:16:52.784 "subsystem": "sock", 00:16:52.784 "config": [ 00:16:52.784 { 00:16:52.784 "method": "sock_set_default_impl", 00:16:52.784 "params": { 00:16:52.784 "impl_name": "posix" 00:16:52.784 } 00:16:52.784 }, 00:16:52.784 { 00:16:52.784 "method": "sock_impl_set_options", 00:16:52.784 "params": { 00:16:52.784 "impl_name": "ssl", 00:16:52.785 "recv_buf_size": 4096, 00:16:52.785 "send_buf_size": 4096, 00:16:52.785 "enable_recv_pipe": true, 00:16:52.785 "enable_quickack": false, 00:16:52.785 "enable_placement_id": 0, 00:16:52.785 "enable_zerocopy_send_server": true, 00:16:52.785 "enable_zerocopy_send_client": false, 00:16:52.785 "zerocopy_threshold": 0, 00:16:52.785 "tls_version": 0, 00:16:52.785 "enable_ktls": false 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "sock_impl_set_options", 00:16:52.785 "params": { 00:16:52.785 "impl_name": "posix", 00:16:52.785 "recv_buf_size": 2097152, 00:16:52.785 "send_buf_size": 2097152, 00:16:52.785 "enable_recv_pipe": true, 00:16:52.785 "enable_quickack": false, 00:16:52.785 "enable_placement_id": 0, 00:16:52.785 "enable_zerocopy_send_server": true, 00:16:52.785 "enable_zerocopy_send_client": false, 00:16:52.785 "zerocopy_threshold": 0, 00:16:52.785 "tls_version": 0, 00:16:52.785 "enable_ktls": false 00:16:52.785 } 00:16:52.785 } 00:16:52.785 ] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "vmd", 00:16:52.785 "config": [] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "accel", 00:16:52.785 "config": [ 00:16:52.785 { 00:16:52.785 "method": "accel_set_options", 00:16:52.785 "params": { 00:16:52.785 "small_cache_size": 128, 00:16:52.785 "large_cache_size": 16, 00:16:52.785 "task_count": 2048, 00:16:52.785 "sequence_count": 2048, 00:16:52.785 "buf_count": 2048 00:16:52.785 } 00:16:52.785 } 00:16:52.785 ] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "bdev", 00:16:52.785 "config": [ 00:16:52.785 { 00:16:52.785 "method": "bdev_set_options", 00:16:52.785 "params": { 00:16:52.785 "bdev_io_pool_size": 65535, 00:16:52.785 "bdev_io_cache_size": 256, 00:16:52.785 "bdev_auto_examine": true, 00:16:52.785 "iobuf_small_cache_size": 128, 00:16:52.785 "iobuf_large_cache_size": 16 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "bdev_raid_set_options", 00:16:52.785 "params": { 00:16:52.785 "process_window_size_kb": 1024, 00:16:52.785 "process_max_bandwidth_mb_sec": 0 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "bdev_iscsi_set_options", 00:16:52.785 "params": { 00:16:52.785 "timeout_sec": 30 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 
"method": "bdev_nvme_set_options", 00:16:52.785 "params": { 00:16:52.785 "action_on_timeout": "none", 00:16:52.785 "timeout_us": 0, 00:16:52.785 "timeout_admin_us": 0, 00:16:52.785 "keep_alive_timeout_ms": 10000, 00:16:52.785 "arbitration_burst": 0, 00:16:52.785 "low_priority_weight": 0, 00:16:52.785 "medium_priority_weight": 0, 00:16:52.785 "high_priority_weight": 0, 00:16:52.785 "nvme_adminq_poll_period_us": 10000, 00:16:52.785 "nvme_ioq_poll_period_us": 0, 00:16:52.785 "io_queue_requests": 0, 00:16:52.785 "delay_cmd_submit": true, 00:16:52.785 "transport_retry_count": 4, 00:16:52.785 "bdev_retry_count": 3, 00:16:52.785 "transport_ack_timeout": 0, 00:16:52.785 "ctrlr_loss_timeout_sec": 0, 00:16:52.785 "reconnect_delay_sec": 0, 00:16:52.785 "fast_io_fail_timeout_sec": 0, 00:16:52.785 "disable_auto_failback": false, 00:16:52.785 "generate_uuids": false, 00:16:52.785 "transport_tos": 0, 00:16:52.785 "nvme_error_stat": false, 00:16:52.785 "rdma_srq_size": 0, 00:16:52.785 "io_path_stat": false, 00:16:52.785 "allow_accel_sequence": false, 00:16:52.785 "rdma_max_cq_size": 0, 00:16:52.785 "rdma_cm_event_timeout_ms": 0, 00:16:52.785 "dhchap_digests": [ 00:16:52.785 "sha256", 00:16:52.785 "sha384", 00:16:52.785 "sha512" 00:16:52.785 ], 00:16:52.785 "dhchap_dhgroups": [ 00:16:52.785 "null", 00:16:52.785 "ffdhe2048", 00:16:52.785 "ffdhe3072", 00:16:52.785 "ffdhe4096", 00:16:52.785 "ffdhe6144", 00:16:52.785 "ffdhe8192" 00:16:52.785 ], 00:16:52.785 "rdma_umr_per_io": false 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "bdev_nvme_set_hotplug", 00:16:52.785 "params": { 00:16:52.785 "period_us": 100000, 00:16:52.785 "enable": false 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "bdev_malloc_create", 00:16:52.785 "params": { 00:16:52.785 "name": "malloc0", 00:16:52.785 "num_blocks": 8192, 00:16:52.785 "block_size": 4096, 00:16:52.785 "physical_block_size": 4096, 00:16:52.785 "uuid": "feafb2d4-0eb1-411b-bec8-b4f79a8afc0a", 00:16:52.785 "optimal_io_boundary": 0, 00:16:52.785 "md_size": 0, 00:16:52.785 "dif_type": 0, 00:16:52.785 "dif_is_head_of_md": false, 00:16:52.785 "dif_pi_format": 0 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "bdev_wait_for_examine" 00:16:52.785 } 00:16:52.785 ] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "scsi", 00:16:52.785 "config": null 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "scheduler", 00:16:52.785 "config": [ 00:16:52.785 { 00:16:52.785 "method": "framework_set_scheduler", 00:16:52.785 "params": { 00:16:52.785 "name": "static" 00:16:52.785 } 00:16:52.785 } 00:16:52.785 ] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "vhost_scsi", 00:16:52.785 "config": [] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "vhost_blk", 00:16:52.785 "config": [] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "ublk", 00:16:52.785 "config": [ 00:16:52.785 { 00:16:52.785 "method": "ublk_create_target", 00:16:52.785 "params": { 00:16:52.785 "cpumask": "1" 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "ublk_start_disk", 00:16:52.785 "params": { 00:16:52.785 "bdev_name": "malloc0", 00:16:52.785 "ublk_id": 0, 00:16:52.785 "num_queues": 1, 00:16:52.785 "queue_depth": 128 00:16:52.785 } 00:16:52.785 } 00:16:52.785 ] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "nbd", 00:16:52.785 "config": [] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "nvmf", 00:16:52.785 "config": [ 00:16:52.785 { 00:16:52.785 "method": "nvmf_set_config", 
00:16:52.785 "params": { 00:16:52.785 "discovery_filter": "match_any", 00:16:52.785 "admin_cmd_passthru": { 00:16:52.785 "identify_ctrlr": false 00:16:52.785 }, 00:16:52.785 "dhchap_digests": [ 00:16:52.785 "sha256", 00:16:52.785 "sha384", 00:16:52.785 "sha512" 00:16:52.785 ], 00:16:52.785 "dhchap_dhgroups": [ 00:16:52.785 "null", 00:16:52.785 "ffdhe2048", 00:16:52.785 "ffdhe3072", 00:16:52.785 "ffdhe4096", 00:16:52.785 "ffdhe6144", 00:16:52.785 "ffdhe8192" 00:16:52.785 ] 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "nvmf_set_max_subsystems", 00:16:52.785 "params": { 00:16:52.785 "max_subsystems": 1024 00:16:52.785 } 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "method": "nvmf_set_crdt", 00:16:52.785 "params": { 00:16:52.785 "crdt1": 0, 00:16:52.785 "crdt2": 0, 00:16:52.785 "crdt3": 0 00:16:52.785 } 00:16:52.785 } 00:16:52.785 ] 00:16:52.785 }, 00:16:52.785 { 00:16:52.785 "subsystem": "iscsi", 00:16:52.785 "config": [ 00:16:52.785 { 00:16:52.785 "method": "iscsi_set_options", 00:16:52.785 "params": { 00:16:52.785 "node_base": "iqn.2016-06.io.spdk", 00:16:52.785 "max_sessions": 128, 00:16:52.788 "max_connections_per_session": 2, 00:16:52.788 "max_queue_depth": 64, 00:16:52.788 "default_time2wait": 2, 00:16:52.788 "default_time2retain": 20, 00:16:52.788 "first_burst_length": 8192, 00:16:52.788 "immediate_data": true, 00:16:52.788 "allow_duplicated_isid": false, 00:16:52.788 "error_recovery_level": 0, 00:16:52.788 "nop_timeout": 60, 00:16:52.788 "nop_in_interval": 30, 00:16:52.788 "disable_chap": false, 00:16:52.788 "require_chap": false, 00:16:52.788 "mutual_chap": false, 00:16:52.788 "chap_group": 0, 00:16:52.788 "max_large_datain_per_connection": 64, 00:16:52.788 "max_r2t_per_connection": 4, 00:16:52.788 "pdu_pool_size": 36864, 00:16:52.788 "immediate_data_pool_size": 16384, 00:16:52.788 "data_out_pool_size": 2048 00:16:52.788 } 00:16:52.788 } 00:16:52.788 ] 00:16:52.788 } 00:16:52.788 ] 00:16:52.788 }' 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75282 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75282 ']' 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75282 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75282 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.788 killing process with pid 75282 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75282' 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75282 00:16:52.788 12:42:52 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75282 00:16:53.729 [2024-12-14 12:42:53.418355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:53.729 [2024-12-14 12:42:53.459130] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:53.729 [2024-12-14 12:42:53.459278] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:53.988 
[2024-12-14 12:42:53.467120] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:53.988 [2024-12-14 12:42:53.467196] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:53.988 [2024-12-14 12:42:53.467213] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:53.988 [2024-12-14 12:42:53.467246] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:53.988 [2024-12-14 12:42:53.467431] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75344 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75344 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75344 ']' 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:55.366 12:42:54 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:55.366 "subsystems": [ 00:16:55.366 { 00:16:55.366 "subsystem": "fsdev", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "fsdev_set_opts", 00:16:55.366 "params": { 00:16:55.366 "fsdev_io_pool_size": 65535, 00:16:55.366 "fsdev_io_cache_size": 256 00:16:55.366 } 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "keyring", 00:16:55.366 "config": [] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "iobuf", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "iobuf_set_options", 00:16:55.366 "params": { 00:16:55.366 "small_pool_count": 8192, 00:16:55.366 "large_pool_count": 1024, 00:16:55.366 "small_bufsize": 8192, 00:16:55.366 "large_bufsize": 135168, 00:16:55.366 "enable_numa": false 00:16:55.366 } 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "sock", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "sock_set_default_impl", 00:16:55.366 "params": { 00:16:55.366 "impl_name": "posix" 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "sock_impl_set_options", 00:16:55.366 "params": { 00:16:55.366 "impl_name": "ssl", 00:16:55.366 "recv_buf_size": 4096, 00:16:55.366 "send_buf_size": 4096, 00:16:55.366 "enable_recv_pipe": true, 00:16:55.366 "enable_quickack": false, 00:16:55.366 "enable_placement_id": 0, 00:16:55.366 "enable_zerocopy_send_server": true, 00:16:55.366 "enable_zerocopy_send_client": false, 00:16:55.366 "zerocopy_threshold": 0, 00:16:55.366 "tls_version": 0, 00:16:55.366 "enable_ktls": false 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "sock_impl_set_options", 00:16:55.366 "params": { 00:16:55.366 "impl_name": "posix", 00:16:55.366 "recv_buf_size": 2097152, 00:16:55.366 "send_buf_size": 2097152, 00:16:55.366 "enable_recv_pipe": 
true, 00:16:55.366 "enable_quickack": false, 00:16:55.366 "enable_placement_id": 0, 00:16:55.366 "enable_zerocopy_send_server": true, 00:16:55.366 "enable_zerocopy_send_client": false, 00:16:55.366 "zerocopy_threshold": 0, 00:16:55.366 "tls_version": 0, 00:16:55.366 "enable_ktls": false 00:16:55.366 } 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "vmd", 00:16:55.366 "config": [] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "accel", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "accel_set_options", 00:16:55.366 "params": { 00:16:55.366 "small_cache_size": 128, 00:16:55.366 "large_cache_size": 16, 00:16:55.366 "task_count": 2048, 00:16:55.366 "sequence_count": 2048, 00:16:55.366 "buf_count": 2048 00:16:55.366 } 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "bdev", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "bdev_set_options", 00:16:55.366 "params": { 00:16:55.366 "bdev_io_pool_size": 65535, 00:16:55.366 "bdev_io_cache_size": 256, 00:16:55.366 "bdev_auto_examine": true, 00:16:55.366 "iobuf_small_cache_size": 128, 00:16:55.366 "iobuf_large_cache_size": 16 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "bdev_raid_set_options", 00:16:55.366 "params": { 00:16:55.366 "process_window_size_kb": 1024, 00:16:55.366 "process_max_bandwidth_mb_sec": 0 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "bdev_iscsi_set_options", 00:16:55.366 "params": { 00:16:55.366 "timeout_sec": 30 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "bdev_nvme_set_options", 00:16:55.366 "params": { 00:16:55.366 "action_on_timeout": "none", 00:16:55.366 "timeout_us": 0, 00:16:55.366 "timeout_admin_us": 0, 00:16:55.366 "keep_alive_timeout_ms": 10000, 00:16:55.366 "arbitration_burst": 0, 00:16:55.366 "low_priority_weight": 0, 00:16:55.366 "medium_priority_weight": 0, 00:16:55.366 "high_priority_weight": 0, 00:16:55.366 "nvme_adminq_poll_period_us": 10000, 00:16:55.366 "nvme_ioq_poll_period_us": 0, 00:16:55.366 "io_queue_requests": 0, 00:16:55.366 "delay_cmd_submit": true, 00:16:55.366 "transport_retry_count": 4, 00:16:55.366 "bdev_retry_count": 3, 00:16:55.366 "transport_ack_timeout": 0, 00:16:55.366 "ctrlr_loss_timeout_sec": 0, 00:16:55.366 "reconnect_delay_sec": 0, 00:16:55.366 "fast_io_fail_timeout_sec": 0, 00:16:55.366 "disable_auto_failback": false, 00:16:55.366 "generate_uuids": false, 00:16:55.366 "transport_tos": 0, 00:16:55.366 "nvme_error_stat": false, 00:16:55.366 "rdma_srq_size": 0, 00:16:55.366 "io_path_stat": false, 00:16:55.366 "allow_accel_sequence": false, 00:16:55.366 "rdma_max_cq_size": 0, 00:16:55.366 "rdma_cm_event_timeout_ms": 0, 00:16:55.366 "dhchap_digests": [ 00:16:55.366 "sha256", 00:16:55.366 "sha384", 00:16:55.366 "sha512" 00:16:55.366 ], 00:16:55.366 "dhchap_dhgroups": [ 00:16:55.366 "null", 00:16:55.366 "ffdhe2048", 00:16:55.366 "ffdhe3072", 00:16:55.366 "ffdhe4096", 00:16:55.366 "ffdhe6144", 00:16:55.366 "ffdhe8192" 00:16:55.366 ], 00:16:55.366 "rdma_umr_per_io": false 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "bdev_nvme_set_hotplug", 00:16:55.366 "params": { 00:16:55.366 "period_us": 100000, 00:16:55.366 "enable": false 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "bdev_malloc_create", 00:16:55.366 "params": { 00:16:55.366 "name": "malloc0", 00:16:55.366 "num_blocks": 8192, 00:16:55.366 "block_size": 4096, 00:16:55.366 "physical_block_size": 4096, 
00:16:55.366 "uuid": "feafb2d4-0eb1-411b-bec8-b4f79a8afc0a", 00:16:55.366 "optimal_io_boundary": 0, 00:16:55.366 "md_size": 0, 00:16:55.366 "dif_type": 0, 00:16:55.366 "dif_is_head_of_md": false, 00:16:55.366 "dif_pi_format": 0 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "bdev_wait_for_examine" 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "scsi", 00:16:55.366 "config": null 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "scheduler", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "framework_set_scheduler", 00:16:55.366 "params": { 00:16:55.366 "name": "static" 00:16:55.366 } 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "vhost_scsi", 00:16:55.366 "config": [] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "vhost_blk", 00:16:55.366 "config": [] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "ublk", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "ublk_create_target", 00:16:55.366 "params": { 00:16:55.366 "cpumask": "1" 00:16:55.366 } 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "method": "ublk_start_disk", 00:16:55.366 "params": { 00:16:55.366 "bdev_name": "malloc0", 00:16:55.366 "ublk_id": 0, 00:16:55.366 "num_queues": 1, 00:16:55.366 "queue_depth": 128 00:16:55.366 } 00:16:55.366 } 00:16:55.366 ] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "nbd", 00:16:55.366 "config": [] 00:16:55.366 }, 00:16:55.366 { 00:16:55.366 "subsystem": "nvmf", 00:16:55.366 "config": [ 00:16:55.366 { 00:16:55.366 "method": "nvmf_set_config", 00:16:55.366 "params": { 00:16:55.367 "discovery_filter": "match_any", 00:16:55.367 "admin_cmd_passthru": { 00:16:55.367 "identify_ctrlr": false 00:16:55.367 }, 00:16:55.367 "dhchap_digests": [ 00:16:55.367 "sha256", 00:16:55.367 "sha384", 00:16:55.367 "sha512" 00:16:55.367 ], 00:16:55.367 "dhchap_dhgroups": [ 00:16:55.367 "null", 00:16:55.367 "ffdhe2048", 00:16:55.367 "ffdhe3072", 00:16:55.367 "ffdhe4096", 00:16:55.367 "ffdhe6144", 00:16:55.367 "ffdhe8192" 00:16:55.367 ] 00:16:55.367 } 00:16:55.367 }, 00:16:55.367 { 00:16:55.367 "method": "nvmf_set_max_subsystems", 00:16:55.367 "params": { 00:16:55.367 "max_subsystems": 1024 00:16:55.367 } 00:16:55.367 }, 00:16:55.367 { 00:16:55.367 "method": "nvmf_set_crdt", 00:16:55.367 "params": { 00:16:55.367 "crdt1": 0, 00:16:55.367 "crdt2": 0, 00:16:55.367 "crdt3": 0 00:16:55.367 } 00:16:55.367 } 00:16:55.367 ] 00:16:55.367 }, 00:16:55.367 { 00:16:55.367 "subsystem": "iscsi", 00:16:55.367 "config": [ 00:16:55.367 { 00:16:55.367 "method": "iscsi_set_options", 00:16:55.367 "params": { 00:16:55.367 "node_base": "iqn.2016-06.io.spdk", 00:16:55.367 "max_sessions": 128, 00:16:55.367 "max_connections_per_session": 2, 00:16:55.367 "max_queue_depth": 64, 00:16:55.367 "default_time2wait": 2, 00:16:55.367 "default_time2retain": 20, 00:16:55.367 "first_burst_length": 8192, 00:16:55.367 "immediate_data": true, 00:16:55.367 "allow_duplicated_isid": false, 00:16:55.367 "error_recovery_level": 0, 00:16:55.367 "nop_timeout": 60, 00:16:55.367 "nop_in_interval": 30, 00:16:55.367 "disable_chap": false, 00:16:55.367 "require_chap": false, 00:16:55.367 "mutual_chap": false, 00:16:55.367 "chap_group": 0, 00:16:55.367 "max_large_datain_per_connection": 64, 00:16:55.367 "max_r2t_per_connection": 4, 00:16:55.367 "pdu_pool_size": 36864, 00:16:55.367 "immediate_data_pool_size": 16384, 00:16:55.367 "data_out_pool_size": 2048 00:16:55.367 } 00:16:55.367 } 00:16:55.367 ] 00:16:55.367 } 
00:16:55.367 ] 00:16:55.367 }' 00:16:55.367 [2024-12-14 12:42:54.974615] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:55.367 [2024-12-14 12:42:54.974761] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75344 ] 00:16:55.627 [2024-12-14 12:42:55.140171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.627 [2024-12-14 12:42:55.264589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.575 [2024-12-14 12:42:56.130080] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:56.575 [2024-12-14 12:42:56.131039] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:56.575 [2024-12-14 12:42:56.138223] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:56.575 [2024-12-14 12:42:56.138335] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:56.575 [2024-12-14 12:42:56.138347] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:56.575 [2024-12-14 12:42:56.138355] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:56.575 [2024-12-14 12:42:56.147177] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:56.575 [2024-12-14 12:42:56.147210] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:56.575 [2024-12-14 12:42:56.154098] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:56.575 [2024-12-14 12:42:56.154231] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:56.575 [2024-12-14 12:42:56.171106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75344 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75344 ']' 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75344 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75344 00:16:56.575 12:42:56 
ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.575 killing process with pid 75344 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75344' 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75344 00:16:56.575 12:42:56 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75344 00:16:57.954 [2024-12-14 12:42:57.423626] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:57.954 [2024-12-14 12:42:57.466096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:57.954 [2024-12-14 12:42:57.466189] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:57.954 [2024-12-14 12:42:57.474078] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:57.954 [2024-12-14 12:42:57.474119] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:57.954 [2024-12-14 12:42:57.474125] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:57.954 [2024-12-14 12:42:57.474144] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:57.954 [2024-12-14 12:42:57.474251] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.439 12:42:58 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:59.439 ************************************ 00:16:59.439 END TEST test_save_ublk_config 00:16:59.439 ************************************ 00:16:59.439 00:16:59.439 real 0m8.138s 00:16:59.439 user 0m5.575s 00:16:59.439 sys 0m3.225s 00:16:59.439 12:42:58 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.439 12:42:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:59.439 12:42:58 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75417 00:16:59.439 12:42:58 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.439 12:42:58 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75417 00:16:59.439 12:42:58 ublk -- common/autotest_common.sh@835 -- # '[' -z 75417 ']' 00:16:59.439 12:42:58 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.439 12:42:58 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:59.439 12:42:58 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.439 12:42:58 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.439 12:42:58 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.439 12:42:58 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.439 [2024-12-14 12:42:58.840510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
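This target (-m 0x3, so two reactors this time) hosts the remaining ublk tests. Stripped of xtrace noise, the test_create_ublk sequence below amounts to one full device lifecycle plus a negative re-stop, using the same rpc_cmd calls as the trace:

    rpc_cmd ublk_create_target
    rpc_cmd bdev_malloc_create 128 4096            # -> Malloc0: 128 MiB, 4 KiB blocks
    rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512  # -> /dev/ublkb0, 4 queues, depth 512
    rpc_cmd ublk_get_disks -n 0                    # verify the reported device node
    # fio write/verify pass against /dev/ublkb0 runs here (see below)
    rpc_cmd ublk_stop_disk 0
    NOT rpc_cmd ublk_stop_disk 0   # must fail: -19 'No such device' once stopped
    rpc_cmd ublk_destroy_target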
00:16:59.439 [2024-12-14 12:42:58.840626] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75417 ] 00:16:59.439 [2024-12-14 12:42:58.994927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:59.439 [2024-12-14 12:42:59.073444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.439 [2024-12-14 12:42:59.073600] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.007 12:42:59 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.007 12:42:59 ublk -- common/autotest_common.sh@868 -- # return 0 00:17:00.007 12:42:59 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:17:00.007 12:42:59 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:00.007 12:42:59 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.007 12:42:59 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.007 ************************************ 00:17:00.007 START TEST test_create_ublk 00:17:00.007 ************************************ 00:17:00.008 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:17:00.008 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:17:00.008 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.008 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.008 [2024-12-14 12:42:59.600077] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:00.008 [2024-12-14 12:42:59.601548] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:00.008 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.008 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:17:00.008 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:17:00.008 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.008 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.267 [2024-12-14 12:42:59.760169] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:00.267 [2024-12-14 12:42:59.760464] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:00.267 [2024-12-14 12:42:59.760478] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:00.267 [2024-12-14 12:42:59.760484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:00.267 [2024-12-14 12:42:59.769257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:00.267 [2024-12-14 12:42:59.769274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:00.267 
[2024-12-14 12:42:59.776082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:00.267 [2024-12-14 12:42:59.776569] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:00.267 [2024-12-14 12:42:59.791088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.267 12:42:59 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:00.267 { 00:17:00.267 "ublk_device": "/dev/ublkb0", 00:17:00.267 "id": 0, 00:17:00.267 "queue_depth": 512, 00:17:00.267 "num_queues": 4, 00:17:00.267 "bdev_name": "Malloc0" 00:17:00.267 } 00:17:00.267 ]' 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:00.267 12:42:59 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:00.267 12:42:59 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:00.267 12:42:59 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:00.267 12:42:59 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
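For readability, the single-line fio command assembled above, with the same flags grouped and annotated (bash array form only; nothing added or removed):

    fio_args=(
      --name=fio_test
      --filename=/dev/ublkb0        # the ublk device created above
      --offset=0 --size=134217728   # the full 128 MiB device
      --rw=write --direct=1         # sequential O_DIRECT writes
      --time_based --runtime=10     # 10 s budget, all spent in the write phase
      --do_verify=1 --verify=pattern --verify_pattern=0xcc
      --verify_state_save=0
    )
    fio "${fio_args[@]}"

Because --time_based hands the entire runtime to the write phase, fio immediately warns that the verification read phase will never start; the run below therefore exercises pattern writes only.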
00:17:00.268 12:42:59 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
00:17:00.527 fio: verification read phase will never start because write phase uses all of runtime
00:17:00.527 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
00:17:00.528 fio-3.35
00:17:00.528 Starting 1 process
00:17:10.513
00:17:10.513 fio_test: (groupid=0, jobs=1): err= 0: pid=75461: Sat Dec 14 12:43:10 2024
00:17:10.513 write: IOPS=14.6k, BW=57.2MiB/s (60.0MB/s)(572MiB/10001msec); 0 zone resets
00:17:10.513 clat (usec): min=35, max=8000, avg=67.61, stdev=124.73
00:17:10.513 lat (usec): min=35, max=8001, avg=68.01, stdev=124.74
00:17:10.513 clat percentiles (usec):
00:17:10.513 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 55], 20.00th=[ 59],
00:17:10.513 | 30.00th=[ 61], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 64],
00:17:10.513 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 74],
00:17:10.513 | 99.00th=[ 83], 99.50th=[ 91], 99.90th=[ 2606], 99.95th=[ 3687],
00:17:10.513 | 99.99th=[ 4015]
00:17:10.513 bw ( KiB/s): min=32808, max=66336, per=100.00%, avg=58611.37, stdev=6594.09, samples=19
00:17:10.513 iops : min= 8202, max=16584, avg=14652.84, stdev=1648.52, samples=19
00:17:10.513 lat (usec) : 50=5.86%, 100=93.77%, 250=0.17%, 500=0.01%, 750=0.01%
00:17:10.513 lat (usec) : 1000=0.01%
00:17:10.513 lat (msec) : 2=0.05%, 4=0.11%, 10=0.02%
00:17:10.513 cpu : usr=2.31%, sys=10.17%, ctx=146455, majf=0, minf=799
00:17:10.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:10.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:10.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:10.513 issued rwts: total=0,146455,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:10.513 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:10.513
00:17:10.513 Run status group 0 (all jobs):
00:17:10.513 WRITE: bw=57.2MiB/s (60.0MB/s), 57.2MiB/s-57.2MiB/s (60.0MB/s-60.0MB/s), io=572MiB (600MB), run=10001-10001msec
00:17:10.513
00:17:10.513 Disk stats (read/write):
00:17:10.513 ublkb0: ios=0/144980, merge=0/0, ticks=0/8736, in_queue=8737, util=99.09%
00:17:10.513 12:43:10 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0
00:17:10.513 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.513 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:10.513 [2024-12-14 12:43:10.208523] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:17:10.513 [2024-12-14 12:43:10.246597] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:10.513 [2024-12-14 12:43:10.247580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:17:10.771 [2024-12-14 12:43:10.253090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:10.771 [2024-12-14 12:43:10.253343] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:17:10.771 [2024-12-14 12:43:10.253356] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.771 12:43:10 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd
ublk_stop_disk 0 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.771 [2024-12-14 12:43:10.277145] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:10.771 request: 00:17:10.771 { 00:17:10.771 "ublk_id": 0, 00:17:10.771 "method": "ublk_stop_disk", 00:17:10.771 "req_id": 1 00:17:10.771 } 00:17:10.771 Got JSON-RPC error response 00:17:10.771 response: 00:17:10.771 { 00:17:10.771 "code": -19, 00:17:10.771 "message": "No such device" 00:17:10.771 } 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.771 12:43:10 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.771 [2024-12-14 12:43:10.293133] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:10.771 [2024-12-14 12:43:10.300098] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:10.771 [2024-12-14 12:43:10.300128] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.771 12:43:10 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.771 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.029 12:43:10 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:11.029 12:43:10 
00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:17:11.029 ************************************
00:17:11.029 END TEST test_create_ublk
00:17:11.029 ************************************
00:17:11.029 12:43:10 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:17:11.029
00:17:11.029 real 0m11.161s
00:17:11.029 user 0m0.529s
00:17:11.029 sys 0m1.104s
00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:11.029 12:43:10 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.287 12:43:10 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:17:11.287 12:43:10 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:11.287 12:43:10 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:11.287 12:43:10 ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.287 ************************************
00:17:11.287 START TEST test_create_multi_ublk
00:17:11.287 ************************************
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.287 [2024-12-14 12:43:10.801070] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:17:11.287 [2024-12-14 12:43:10.802581] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.287 12:43:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.287 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.287 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:17:11.287 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:17:11.287 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.287 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.545 [2024-12-14 12:43:11.024179] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:17:11.545 [2024-12-14 12:43:11.024483] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:17:11.545 [2024-12-14 12:43:11.024495] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:17:11.546 [2024-12-14 12:43:11.024503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:17:11.546 [2024-12-14 12:43:11.036119] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:17:11.546 [2024-12-14 12:43:11.036138] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:17:11.546 [2024-12-14 12:43:11.048074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:17:11.546 [2024-12-14 12:43:11.048563] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:17:11.546 [2024-12-14 12:43:11.080079] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:17:11.546 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.546 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:17:11.546 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:11.546 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:17:11.546 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.546 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.803 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.803 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:17:11.803 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:11.804 [2024-12-14 12:43:11.319180] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:17:11.804 [2024-12-14 12:43:11.319474] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:17:11.804 [2024-12-14 12:43:11.319487] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:17:11.804 [2024-12-14 12:43:11.319492] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:17:11.804 [2024-12-14 12:43:11.331088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:17:11.804 [2024-12-14 12:43:11.331104] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:17:11.804 [2024-12-14 12:43:11.343086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:17:11.804 [2024-12-14 12:43:11.343579] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:17:11.804 [2024-12-14 12:43:11.368095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:11.804 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:12.062 [2024-12-14 12:43:11.607171] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:17:12.062 [2024-12-14 12:43:11.607469] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:17:12.062 [2024-12-14 12:43:11.607480] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:17:12.062 [2024-12-14 12:43:11.607487] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:17:12.062 [2024-12-14 12:43:11.617689] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:17:12.062 [2024-12-14 12:43:11.617708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:17:12.062 [2024-12-14 12:43:11.630082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:17:12.062 [2024-12-14 12:43:11.630573] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:17:12.062 [2024-12-14 12:43:11.655096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.062 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:12.320 [2024-12-14 12:43:11.894188] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:17:12.320 [2024-12-14 12:43:11.894478] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:17:12.320 [2024-12-14 12:43:11.894490] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:17:12.320 [2024-12-14 12:43:11.894495] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
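The four create/start rounds traced above reduce to a small loop; a sketch using only RPCs and arguments that appear in this log:

#!/usr/bin/env bash
# Re-create the four-disk setup: one 128 MiB malloc bdev (4 KiB blocks) per ublk device,
# each exported with 4 queues of depth 512. Each ublk disk appears as /dev/ublkb$i.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" ublk_create_target
for i in 0 1 2 3; do
  "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096
  "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
done

Each ublk_start_disk call walks the same control-command ladder visible in the trace: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV.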
00:17:12.320 [2024-12-14 12:43:11.906100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:17:12.320 [2024-12-14 12:43:11.906118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:17:12.320 [2024-12-14 12:43:11.918100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:17:12.320 [2024-12-14 12:43:11.918584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:17:12.320 [2024-12-14 12:43:11.931106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:12.320 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:17:12.320 {
00:17:12.320 "ublk_device": "/dev/ublkb0",
00:17:12.320 "id": 0,
00:17:12.320 "queue_depth": 512,
00:17:12.321 "num_queues": 4,
00:17:12.321 "bdev_name": "Malloc0"
00:17:12.321 },
00:17:12.321 {
00:17:12.321 "ublk_device": "/dev/ublkb1",
00:17:12.321 "id": 1,
00:17:12.321 "queue_depth": 512,
00:17:12.321 "num_queues": 4,
00:17:12.321 "bdev_name": "Malloc1"
00:17:12.321 },
00:17:12.321 {
00:17:12.321 "ublk_device": "/dev/ublkb2",
00:17:12.321 "id": 2,
00:17:12.321 "queue_depth": 512,
00:17:12.321 "num_queues": 4,
00:17:12.321 "bdev_name": "Malloc2"
00:17:12.321 },
00:17:12.321 {
00:17:12.321 "ublk_device": "/dev/ublkb3",
00:17:12.321 "id": 3,
00:17:12.321 "queue_depth": 512,
00:17:12.321 "num_queues": 4,
00:17:12.321 "bdev_name": "Malloc3"
00:17:12.321 }
00:17:12.321 ]'
00:17:12.321 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:17:12.321 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:12.321 12:43:11 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:17:12.321 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:17:12.321 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:17:12.321 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:17:12.321 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
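The per-field assertions that follow can be condensed into a standalone check; field names come from the ublk_get_disks JSON above, while the exit-on-first-mismatch style is an assumption:

#!/usr/bin/env bash
# Validate one device record from ublk_get_disks with jq, as the [[ ... ]] checks do.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

disks=$("$rpc" ublk_get_disks)
[[ $(jq -r '.[0].ublk_device' <<<"$disks") == /dev/ublkb0 ]]
[[ $(jq -r '.[0].id'          <<<"$disks") == 0 ]]
[[ $(jq -r '.[0].queue_depth' <<<"$disks") == 512 ]]
[[ $(jq -r '.[0].num_queues'  <<<"$disks") == 4 ]]
[[ $(jq -r '.[0].bdev_name'   <<<"$disks") == Malloc0 ]]
echo 'device 0 looks as expected'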
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:12.579 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:17:12.837 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.095 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:13.095 [2024-12-14 12:43:12.636155] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:17:13.095 [2024-12-14 12:43:12.683085] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:13.095 [2024-12-14 12:43:12.684016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:17:13.095 [2024-12-14 12:43:12.691082] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:13.095 [2024-12-14 12:43:12.691327] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:17:13.095 [2024-12-14 12:43:12.691340] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:13.096 [2024-12-14 12:43:12.707157] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:17:13.096 [2024-12-14 12:43:12.748591] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:13.096 [2024-12-14 12:43:12.749717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:17:13.096 [2024-12-14 12:43:12.755083] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:13.096 [2024-12-14 12:43:12.755325] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:17:13.096 [2024-12-14 12:43:12.755338] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:13.096 [2024-12-14 12:43:12.771145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:17:13.096 [2024-12-14 12:43:12.811115] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:13.096 [2024-12-14 12:43:12.811874] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:17:13.096 [2024-12-14 12:43:12.819086] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:13.096 [2024-12-14 12:43:12.819320] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:17:13.096 [2024-12-14 12:43:12.819331] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.096 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:13.356 [2024-12-14 12:43:12.835145] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:17:13.356 [2024-12-14 12:43:12.879097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:17:13.356 [2024-12-14 12:43:12.879674] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:17:13.356 [2024-12-14 12:43:12.894073] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:17:13.356 [2024-12-14 12:43:12.894310] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:17:13.356 [2024-12-14 12:43:12.894322] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:17:13.356 12:43:12 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.356 12:43:12 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:17:13.356 [2024-12-14 12:43:13.081118] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:17:13.356 [2024-12-14 12:43:13.089069] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:17:13.356 [2024-12-14 12:43:13.089098] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:17:13.616 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:17:13.616 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:13.616 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:17:13.616 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.616 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:13.880 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:13.880 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:13.880 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:17:13.880 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:13.880 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:14.141 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.141 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:14.141 12:43:13 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:17:14.141 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.141 12:43:13 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:14.402 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.402 12:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID)
00:17:14.402 12:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:17:14.402 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.402 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
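Teardown in the same order as the trace, as a sketch; the longer -t 120 RPC timeout on ublk_destroy_target mirrors the logged call, since shutdown waits for the kernel-side queues to drain:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in 0 1 2 3; do
  "$rpc" ublk_stop_disk "$i"      # STOP_DEV then DEL_DEV, per device
done
"$rpc" -t 120 ublk_destroy_target # tears down the whole ublk target
for i in 0 1 2 3; do
  "$rpc" bdev_malloc_delete "Malloc$i"
done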
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:17:14.664
00:17:14.664 real 0m3.499s
00:17:14.664 user 0m0.815s
00:17:14.664 sys 0m0.153s
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:14.664 12:43:14 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:17:14.664 ************************************
00:17:14.664 END TEST test_create_multi_ublk
00:17:14.664 ************************************
00:17:14.664 12:43:14 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:17:14.664 12:43:14 ublk -- ublk/ublk.sh@147 -- # cleanup
00:17:14.664 12:43:14 ublk -- ublk/ublk.sh@130 -- # killprocess 75417
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@954 -- # '[' -z 75417 ']'
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@958 -- # kill -0 75417
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@959 -- # uname
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75417
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75417'
00:17:14.664 killing process with pid 75417
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@973 -- # kill 75417
00:17:14.664 12:43:14 ublk -- common/autotest_common.sh@978 -- # wait 75417
00:17:15.234 [2024-12-14 12:43:14.882845] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:17:15.234 [2024-12-14 12:43:14.882890] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:17:15.807
00:17:15.807 real 0m25.138s
00:17:15.807 user 0m35.638s
00:17:15.807 sys 0m9.139s
00:17:15.807 12:43:15 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:15.807 ************************************
00:17:15.807 END TEST ublk
00:17:15.807 ************************************
00:17:15.807 12:43:15 ublk -- common/autotest_common.sh@10 -- # set +x
00:17:16.067 12:43:15 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
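A simplified sketch of the killprocess helper whose trace appears above; the in-tree version also special-cases targets that were launched through sudo, which is elided here:

#!/usr/bin/env bash
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if it already exited
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap it so the caller sees a clean exit
}
killprocess 75417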
00:17:16.067 12:43:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:17:16.067 12:43:15 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:16.067 12:43:15 -- common/autotest_common.sh@10 -- # set +x
00:17:16.067 ************************************
00:17:16.067 START TEST ublk_recovery
00:17:16.067 ************************************
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:17:16.067 * Looking for test storage...
00:17:16.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:16.067 12:43:15 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:16.067 --rc genhtml_branch_coverage=1
00:17:16.067 --rc genhtml_function_coverage=1
00:17:16.067 --rc genhtml_legend=1
00:17:16.067 --rc geninfo_all_blocks=1
00:17:16.067 --rc geninfo_unexecuted_blocks=1
00:17:16.067
00:17:16.067 '
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:16.067 --rc genhtml_branch_coverage=1
00:17:16.067 --rc genhtml_function_coverage=1
00:17:16.067 --rc genhtml_legend=1
00:17:16.067 --rc geninfo_all_blocks=1
00:17:16.067 --rc geninfo_unexecuted_blocks=1
00:17:16.067
00:17:16.067 '
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:17:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:16.067 --rc genhtml_branch_coverage=1
00:17:16.067 --rc genhtml_function_coverage=1
00:17:16.067 --rc genhtml_legend=1
00:17:16.067 --rc geninfo_all_blocks=1
00:17:16.067 --rc geninfo_unexecuted_blocks=1
00:17:16.067
00:17:16.067 '
00:17:16.067 12:43:15 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:17:16.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:16.067 --rc genhtml_branch_coverage=1
00:17:16.067 --rc genhtml_function_coverage=1
00:17:16.067 --rc genhtml_legend=1
00:17:16.067 --rc geninfo_all_blocks=1
00:17:16.067 --rc geninfo_unexecuted_blocks=1
00:17:16.067
00:17:16.067 '
00:17:16.067 12:43:15 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:17:16.067 12:43:15 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:17:16.068 12:43:15 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:17:16.068 12:43:15 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:17:16.068 12:43:15 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75815
00:17:16.068 12:43:15 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:16.068 12:43:15 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75815
00:17:16.068 12:43:15 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75815 ']'
00:17:16.068 12:43:15 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:17:16.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:16.068 12:43:15 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:16.068 12:43:15 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:16.068 12:43:15 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:16.068 12:43:15 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:16.068 12:43:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:16.328 [2024-12-14 12:43:15.817873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:17:16.328 [2024-12-14 12:43:15.817994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75815 ]
00:17:16.328 [2024-12-14 12:43:15.976018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:16.588 [2024-12-14 12:43:16.096011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:17:16.588 [2024-12-14 12:43:16.096105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:17:17.160 12:43:16 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:17.160 [2024-12-14 12:43:16.700078] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:17:17.160 [2024-12-14 12:43:16.701982] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.160 12:43:16 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:17.160 malloc0
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.160 12:43:16 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
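How this target is brought up, in outline: a sketch where the polling loop is a stand-in for the harness's waitforlisten helper, and rpc_get_methods is used only as a cheap liveness probe (both names exist in the SPDK tree, but the loop itself is an assumption, not the helper's actual body):

#!/usr/bin/env bash
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

modprobe ublk_drv                    # kernel side, as in the trace above
"$tgt" -m 0x3 -L ublk &              # two reactors (cores 0-1), ublk debug logging
spdk_pid=$!
until "$rpc" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2                          # wait for /var/tmp/spdk.sock to accept RPCs
done
echo "spdk_tgt listening, pid $spdk_pid"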
00:17:17.160 [2024-12-14 12:43:16.804199] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:17:17.160 [2024-12-14 12:43:16.804295] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:17:17.160 [2024-12-14 12:43:16.804306] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:17:17.160 [2024-12-14 12:43:16.804313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:17:17.160 [2024-12-14 12:43:16.813156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:17:17.160 [2024-12-14 12:43:16.813177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:17:17.160 [2024-12-14 12:43:16.820090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:17:17.160 [2024-12-14 12:43:16.820243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:17:17.160 [2024-12-14 12:43:16.837088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:17:17.160 1
00:17:17.160 12:43:16 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:17.160 12:43:16 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:17:18.535 12:43:17 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75850
00:17:18.535 12:43:17 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:17:18.535 12:43:17 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:17:18.535 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:17:18.535 fio-3.35
00:17:18.535 Starting 1 process
00:17:23.802 12:43:22 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75815
00:17:23.802 12:43:22 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:17:29.091 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75815 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:17:29.091 12:43:27 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75955
00:17:29.091 12:43:27 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:17:29.091 12:43:27 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75955
00:17:29.091 12:43:27 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75955 ']'
00:17:29.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:29.091 12:43:27 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:29.091 12:43:27 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:29.091 12:43:27 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:29.091 12:43:27 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:29.091 12:43:27 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:29.091 12:43:27 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:17:29.091 [2024-12-14 12:43:27.938093] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
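The crash-and-recover drill traced above, sketched end to end; it assumes the first target's pid is in $spdk_pid and /dev/ublkb1 is already being served, as set up in the preceding steps (the fio arguments mirror the logged invocation):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
  --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
fio_pid=$!
sleep 5
kill -9 "$spdk_pid"                # simulate a target crash while fio is mid-run
"$tgt" -m 0x3 -L ublk & spdk_pid=$!
sleep 5                            # the harness uses waitforlisten here instead
"$rpc" ublk_create_target
"$rpc" bdev_malloc_create -b malloc0 64 4096
"$rpc" ublk_recover_disk malloc0 1 # re-attach kernel ublk device 1 to the new process
wait "$fio_pid"                    # fio rides out the crash and finishes its 60 s run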
00:17:29.091 [2024-12-14 12:43:27.938216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75955 ]
00:17:29.091 [2024-12-14 12:43:28.095705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:17:29.091 [2024-12-14 12:43:28.172008] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:29.091 [2024-12-14 12:43:28.172031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:17:29.091 12:43:28 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:29.091 [2024-12-14 12:43:28.769074] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:17:29.091 [2024-12-14 12:43:28.770598] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.091 12:43:28 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.091 12:43:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:29.350 malloc0
00:17:29.350 12:43:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.350 12:43:28 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:17:29.350 12:43:28 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:29.350 12:43:28 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:17:29.350 [2024-12-14 12:43:28.849178] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:17:29.350 [2024-12-14 12:43:28.849211] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:17:29.350 [2024-12-14 12:43:28.849219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:29.350 1
00:17:29.350 [2024-12-14 12:43:28.857099] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:29.350 [2024-12-14 12:43:28.857118] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:29.350 12:43:28 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:29.350 12:43:28 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75850
00:17:30.356 [2024-12-14 12:43:29.857143] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:30.356 [2024-12-14 12:43:29.865078] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:30.356 [2024-12-14 12:43:29.865094] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:31.299 [2024-12-14 12:43:30.865115] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:31.299 [2024-12-14 12:43:30.869078] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
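While recovery is in flight the target retries UBLK_CMD_GET_DEV_INFO about once per second, as the repeated entries here show. A caller-side poll can watch for the device the same way; the jq test for "device 1 is back" is an assumption about how you would express that check, not harness code:

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

until "$rpc" ublk_get_disks | jq -e '.[] | select(.id == 1)' >/dev/null; do
  sleep 1   # mirrors the once-per-second GET_DEV_INFO cadence in the log
done
echo 'ublk device 1 visible again'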
00:17:31.299 [2024-12-14 12:43:30.869092] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:32.240 [2024-12-14 12:43:31.869106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:17:32.241 [2024-12-14 12:43:31.877076] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:17:32.241 [2024-12-14 12:43:31.877090] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:17:32.241 [2024-12-14 12:43:31.877098] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:17:32.241 [2024-12-14 12:43:31.877159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:17:54.184 [2024-12-14 12:43:52.952091] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:17:54.184 [2024-12-14 12:43:52.957859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:17:54.184 [2024-12-14 12:43:52.964258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:17:54.184 [2024-12-14 12:43:52.964279] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
00:18:20.729
00:18:20.729 fio_test: (groupid=0, jobs=1): err= 0: pid=75853: Sat Dec 14 12:44:18 2024
00:18:20.729 read: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(3194MiB/60005msec)
00:18:20.729 slat (nsec): min=1032, max=321855, avg=5403.33, stdev=1609.90
00:18:20.729 clat (usec): min=589, max=30123k, avg=4354.47, stdev=249267.80
00:18:20.729 lat (usec): min=594, max=30123k, avg=4359.87, stdev=249267.79
00:18:20.729 clat percentiles (usec):
00:18:20.729 | 1.00th=[ 1844], 5.00th=[ 1926], 10.00th=[ 1975], 20.00th=[ 2073],
00:18:20.729 | 30.00th=[ 2114], 40.00th=[ 2147], 50.00th=[ 2147], 60.00th=[ 2180],
00:18:20.729 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 3425],
00:18:20.729 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 7963], 99.95th=[12387],
00:18:20.729 | 99.99th=[13304]
00:18:20.729 bw ( KiB/s): min=10312, max=124208, per=100.00%, avg=107297.38, stdev=18892.22, samples=60
00:18:20.729 iops : min= 2578, max=31052, avg=26824.33, stdev=4723.05, samples=60
00:18:20.729 write: IOPS=13.6k, BW=53.2MiB/s (55.7MB/s)(3190MiB/60005msec); 0 zone resets
00:18:20.729 slat (nsec): min=1078, max=194710, avg=5600.43, stdev=1550.14
00:18:20.729 clat (usec): min=608, max=30124k, avg=5033.70, stdev=282812.46
00:18:20.729 lat (usec): min=612, max=30124k, avg=5039.30, stdev=282812.45
00:18:20.729 clat percentiles (usec):
00:18:20.729 | 1.00th=[ 1909], 5.00th=[ 2024], 10.00th=[ 2057], 20.00th=[ 2180],
00:18:20.729 | 30.00th=[ 2212], 40.00th=[ 2245], 50.00th=[ 2245], 60.00th=[ 2278],
00:18:20.729 | 70.00th=[ 2278], 80.00th=[ 2311], 90.00th=[ 2376], 95.00th=[ 3359],
00:18:20.729 | 99.00th=[ 5800], 99.50th=[ 6194], 99.90th=[ 8029], 99.95th=[12518],
00:18:20.729 | 99.99th=[13435]
00:18:20.729 bw ( KiB/s): min=10448, max=124399, per=100.00%, avg=107133.32, stdev=18978.37, samples=60
00:18:20.729 iops : min= 2612, max=31099, avg=26783.32, stdev=4744.58, samples=60
00:18:20.729 lat (usec) : 750=0.01%, 1000=0.01%
00:18:20.729 lat (msec) : 2=8.37%, 4=87.95%, 10=3.62%, 20=0.05%, >=2000=0.01%
00:18:20.729 cpu : usr=3.16%, sys=15.07%, ctx=53922, majf=0, minf=14
00:18:20.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:18:20.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:18:20.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:18:20.729 issued rwts: total=817592,816601,0,0 short=0,0,0,0 dropped=0,0,0,0
00:18:20.729 latency : target=0, window=0, percentile=100.00%, depth=128
00:18:20.729
00:18:20.729 Run status group 0 (all jobs):
00:18:20.729 READ: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=3194MiB (3349MB), run=60005-60005msec
00:18:20.729 WRITE: bw=53.2MiB/s (55.7MB/s), 53.2MiB/s-53.2MiB/s (55.7MB/s-55.7MB/s), io=3190MiB (3345MB), run=60005-60005msec
00:18:20.729
00:18:20.729 Disk stats (read/write):
00:18:20.729 ublkb1: ios=814655/813625, merge=0/0, ticks=3511000/3992201, in_queue=7503201, util=99.87%
00:18:20.729 12:44:18 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:18:20.729 [2024-12-14 12:44:18.097001] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:18:20.729 [2024-12-14 12:44:18.132206] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:18:20.729 [2024-12-14 12:44:18.132357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:18:20.729 [2024-12-14 12:44:18.142087] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:18:20.729 [2024-12-14 12:44:18.142193] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:18:20.729 [2024-12-14 12:44:18.142200] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.729 12:44:18 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:18:20.729 [2024-12-14 12:44:18.158169] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:18:20.729 [2024-12-14 12:44:18.166077] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:18:20.729 [2024-12-14 12:44:18.166109] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:20.729 12:44:18 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:18:20.729 12:44:18 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:18:20.729 12:44:18 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75955
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75955 ']'
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75955
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75955
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:20.729 killing process with pid 75955
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75955'
00:18:20.729 12:44:18 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75955
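The human-readable fio summary above (roughly 13.6k IOPS each way at 99.87% device utilization) is awkward to scrape. fio can emit the same run as JSON; the jq paths below follow fio's documented JSON layout and are worth double-checking against your fio version:

#!/usr/bin/env bash
fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio \
    --rw=randrw --direct=1 --time_based --runtime=60 --output-format=json > fio.json
# Pull the headline numbers back out of the structured output.
jq '.jobs[0] | {read_iops: .read.iops, write_iops: .write.iops}' fio.json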
wait 75955 00:18:20.729 [2024-12-14 12:44:19.243528] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:20.729 [2024-12-14 12:44:19.243583] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:20.729 00:18:20.729 real 1m4.407s 00:18:20.729 user 1m46.365s 00:18:20.729 sys 0m22.602s 00:18:20.729 12:44:19 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.729 ************************************ 00:18:20.729 END TEST ublk_recovery 00:18:20.729 ************************************ 00:18:20.729 12:44:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.729 12:44:20 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:20.729 12:44:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:20.729 12:44:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.729 12:44:20 -- common/autotest_common.sh@10 -- # set +x 00:18:20.729 12:44:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:20.729 12:44:20 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.729 12:44:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:20.729 12:44:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.729 12:44:20 -- common/autotest_common.sh@10 -- # set +x 00:18:20.729 ************************************ 00:18:20.729 START TEST ftl 00:18:20.729 ************************************ 00:18:20.729 12:44:20 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.729 * Looking for test storage... 
00:18:20.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.729 12:44:20 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.730 12:44:20 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.730 12:44:20 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.730 12:44:20 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.730 12:44:20 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.730 12:44:20 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.730 12:44:20 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:20.730 12:44:20 ftl -- scripts/common.sh@345 -- # : 1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.730 12:44:20 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:20.730 12:44:20 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@353 -- # local d=1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.730 12:44:20 ftl -- scripts/common.sh@355 -- # echo 1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.730 12:44:20 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@353 -- # local d=2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.730 12:44:20 ftl -- scripts/common.sh@355 -- # echo 2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.730 12:44:20 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.730 12:44:20 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.730 12:44:20 ftl -- scripts/common.sh@368 -- # return 0 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:20.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.730 --rc genhtml_branch_coverage=1 00:18:20.730 --rc genhtml_function_coverage=1 00:18:20.730 --rc genhtml_legend=1 00:18:20.730 --rc geninfo_all_blocks=1 00:18:20.730 --rc geninfo_unexecuted_blocks=1 00:18:20.730 00:18:20.730 ' 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:20.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.730 --rc genhtml_branch_coverage=1 00:18:20.730 --rc genhtml_function_coverage=1 00:18:20.730 --rc genhtml_legend=1 00:18:20.730 --rc geninfo_all_blocks=1 00:18:20.730 --rc geninfo_unexecuted_blocks=1 00:18:20.730 00:18:20.730 ' 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:20.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.730 --rc genhtml_branch_coverage=1 00:18:20.730 --rc genhtml_function_coverage=1 00:18:20.730 --rc 
genhtml_legend=1 00:18:20.730 --rc geninfo_all_blocks=1 00:18:20.730 --rc geninfo_unexecuted_blocks=1 00:18:20.730 00:18:20.730 ' 00:18:20.730 12:44:20 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:20.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.730 --rc genhtml_branch_coverage=1 00:18:20.730 --rc genhtml_function_coverage=1 00:18:20.730 --rc genhtml_legend=1 00:18:20.730 --rc geninfo_all_blocks=1 00:18:20.730 --rc geninfo_unexecuted_blocks=1 00:18:20.730 00:18:20.730 ' 00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:20.730 12:44:20 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.730 12:44:20 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.730 12:44:20 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.730 12:44:20 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:20.730 12:44:20 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.730 12:44:20 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.730 12:44:20 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:20.730 12:44:20 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:20.730 12:44:20 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.730 12:44:20 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.730 12:44:20 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:20.730 12:44:20 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:20.730 12:44:20 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.730 12:44:20 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.730 12:44:20 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:20.730 12:44:20 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:20.730 12:44:20 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.730 12:44:20 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.730 12:44:20 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:20.730 12:44:20 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:20.730 12:44:20 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.730 12:44:20 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.730 12:44:20 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.730 12:44:20 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.730 12:44:20 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:20.730 12:44:20 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:20.730 12:44:20 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.730 12:44:20 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED=
00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:18:20.730 12:44:20 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:18:20.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:18:20.989 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:18:20.989 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:18:20.989 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:18:20.989 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:18:21.249 12:44:20 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76759
00:18:21.249 12:44:20 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:18:21.249 12:44:20 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76759
00:18:21.249 12:44:20 ftl -- common/autotest_common.sh@835 -- # '[' -z 76759 ']'
00:18:21.249 12:44:20 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:21.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:21.249 12:44:20 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:21.249 12:44:20 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:21.249 12:44:20 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:21.249 12:44:20 ftl -- common/autotest_common.sh@10 -- # set +x
00:18:21.249 [2024-12-14 12:44:20.830869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:18:21.507 [2024-12-14 12:44:20.831662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76759 ]
00:18:21.507 [2024-12-14 12:44:20.992338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:21.507 [2024-12-14 12:44:21.097838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.074 12:44:21 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:22.074 12:44:21 ftl -- common/autotest_common.sh@868 -- # return 0
00:18:22.074 12:44:21 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:18:22.332 12:44:21 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:18:22.899 12:44:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:18:22.899 12:44:22 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:18:23.466 12:44:23 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:18:23.466 12:44:23 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:18:23.466 12:44:23 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@50 -- # break
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@63 -- # break
00:18:23.724 12:44:23 ftl -- ftl/ftl.sh@66 -- # killprocess 76759
00:18:23.724 12:44:23 ftl -- common/autotest_common.sh@954 -- # '[' -z 76759 ']'
00:18:23.724 12:44:23 ftl -- common/autotest_common.sh@958 -- # kill -0 76759
00:18:23.724 12:44:23 ftl -- common/autotest_common.sh@959 -- # uname
00:18:23.724 12:44:23 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:23.724 12:44:23 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76759
00:18:23.983 killing process with pid 76759
00:18:23.983 12:44:23 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:23.983 12:44:23 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:23.983 12:44:23 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76759'
00:18:23.983 12:44:23 ftl -- common/autotest_common.sh@973 -- # kill 76759
00:18:23.983 12:44:23 ftl -- common/autotest_common.sh@978 -- # wait 76759
00:18:25.361 12:44:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']'
00:18:25.361 12:44:24 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:18:25.361 12:44:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:18:25.361 12:44:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:25.361 12:44:24 ftl -- common/autotest_common.sh@10 -- # set +x
00:18:25.361 ************************************
00:18:25.361 START TEST ftl_fio_basic
00:18:25.361 ************************************
00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic
00:18:25.361 * Looking for test storage...
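
Everything fio.sh needs was just derived from bdev_get_bdevs JSON with two jq filters: the NV cache disk must be non-zoned, at least 1310720 blocks (5 GiB at 4 KiB) and report md_size==64, i.e. a namespace with separate per-block metadata, which FTL uses for its non-volatile cache; the base disk is any other qualifying namespace. Replayed by hand against the same target, the selection looks roughly like this (the jq --arg parameterization is the only liberty taken with the original one-liners, which hardcode 0000:00:10.0):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # NV cache candidates: non-zoned, >= 1310720 blocks, 64-byte metadata.
  cache_disks=$("$rpc" bdev_get_bdevs | jq -r '.[]
      | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address')
  # Base candidates: everything else large enough, minus the chosen cache.
  base_disks=$("$rpc" bdev_get_bdevs | jq -r --arg nv "$cache_disks" '.[]
      | select(.driver_specific.nvme[0].pci_address != $nv
               and .zoned == false and .num_blocks >= 1310720)
      .driver_specific.nvme[].pci_address')
  echo "cache=$cache_disks base=$base_disks"   # 0000:00:10.0 / 0000:00:11.0 here
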
00:18:25.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.361 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:25.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.361 --rc genhtml_branch_coverage=1 00:18:25.361 --rc genhtml_function_coverage=1 00:18:25.361 --rc genhtml_legend=1 00:18:25.361 --rc geninfo_all_blocks=1 00:18:25.361 --rc geninfo_unexecuted_blocks=1 00:18:25.361 00:18:25.361 ' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:25.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.362 --rc 
genhtml_branch_coverage=1 00:18:25.362 --rc genhtml_function_coverage=1 00:18:25.362 --rc genhtml_legend=1 00:18:25.362 --rc geninfo_all_blocks=1 00:18:25.362 --rc geninfo_unexecuted_blocks=1 00:18:25.362 00:18:25.362 ' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:25.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.362 --rc genhtml_branch_coverage=1 00:18:25.362 --rc genhtml_function_coverage=1 00:18:25.362 --rc genhtml_legend=1 00:18:25.362 --rc geninfo_all_blocks=1 00:18:25.362 --rc geninfo_unexecuted_blocks=1 00:18:25.362 00:18:25.362 ' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:25.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.362 --rc genhtml_branch_coverage=1 00:18:25.362 --rc genhtml_function_coverage=1 00:18:25.362 --rc genhtml_legend=1 00:18:25.362 --rc geninfo_all_blocks=1 00:18:25.362 --rc geninfo_unexecuted_blocks=1 00:18:25.362 00:18:25.362 ' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.362 
12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76887 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76887 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76887 ']' 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
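
waitforlisten now polls until the new target (pid 76887, started with -m 7, a core mask covering cores 0-2, which matches the three reactors that come up next) answers on /var/tmp/spdk.sock, giving up after max_retries=100; xtrace is disabled so the polling loop does not flood the log. A rough stand-in for that loop, assuming only rpc.py and the default socket; the real helper in autotest_common.sh is more careful about error reporting:

  # Sketch: wait for spdk_tgt (pid 76887) to serve RPCs on the default socket.
  pid=76887
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for (( retry = 0; retry < 100; retry++ )); do
      kill -0 "$pid" 2>/dev/null || { echo "target died before listening"; exit 1; }
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.5
  done
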
00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.362 12:44:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.362 [2024-12-14 12:44:24.972338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:25.362 [2024-12-14 12:44:24.972669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76887 ] 00:18:25.622 [2024-12-14 12:44:25.136773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:25.622 [2024-12-14 12:44:25.289644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.622 [2024-12-14 12:44:25.290006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.622 [2024-12-14 12:44:25.290029] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.565 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:26.566 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:26.827 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:27.089 { 00:18:27.089 "name": "nvme0n1", 00:18:27.089 "aliases": [ 00:18:27.089 "00e95889-04b6-4d99-9b57-3dfba2db5cc6" 00:18:27.089 ], 00:18:27.089 "product_name": "NVMe disk", 00:18:27.089 "block_size": 4096, 00:18:27.089 "num_blocks": 1310720, 00:18:27.089 "uuid": "00e95889-04b6-4d99-9b57-3dfba2db5cc6", 00:18:27.089 "numa_id": -1, 00:18:27.089 "assigned_rate_limits": { 00:18:27.089 "rw_ios_per_sec": 0, 00:18:27.089 "rw_mbytes_per_sec": 0, 00:18:27.089 "r_mbytes_per_sec": 0, 00:18:27.089 "w_mbytes_per_sec": 0 00:18:27.089 }, 00:18:27.089 "claimed": false, 00:18:27.089 "zoned": false, 00:18:27.089 "supported_io_types": { 00:18:27.089 "read": true, 00:18:27.089 "write": true, 00:18:27.089 "unmap": true, 00:18:27.089 "flush": true, 00:18:27.089 "reset": true, 00:18:27.089 "nvme_admin": true, 00:18:27.089 "nvme_io": true, 00:18:27.089 "nvme_io_md": 
false, 00:18:27.089 "write_zeroes": true, 00:18:27.089 "zcopy": false, 00:18:27.089 "get_zone_info": false, 00:18:27.089 "zone_management": false, 00:18:27.089 "zone_append": false, 00:18:27.089 "compare": true, 00:18:27.089 "compare_and_write": false, 00:18:27.089 "abort": true, 00:18:27.089 "seek_hole": false, 00:18:27.089 "seek_data": false, 00:18:27.089 "copy": true, 00:18:27.089 "nvme_iov_md": false 00:18:27.089 }, 00:18:27.089 "driver_specific": { 00:18:27.089 "nvme": [ 00:18:27.089 { 00:18:27.089 "pci_address": "0000:00:11.0", 00:18:27.089 "trid": { 00:18:27.089 "trtype": "PCIe", 00:18:27.089 "traddr": "0000:00:11.0" 00:18:27.089 }, 00:18:27.089 "ctrlr_data": { 00:18:27.089 "cntlid": 0, 00:18:27.089 "vendor_id": "0x1b36", 00:18:27.089 "model_number": "QEMU NVMe Ctrl", 00:18:27.089 "serial_number": "12341", 00:18:27.089 "firmware_revision": "8.0.0", 00:18:27.089 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:27.089 "oacs": { 00:18:27.089 "security": 0, 00:18:27.089 "format": 1, 00:18:27.089 "firmware": 0, 00:18:27.089 "ns_manage": 1 00:18:27.089 }, 00:18:27.089 "multi_ctrlr": false, 00:18:27.089 "ana_reporting": false 00:18:27.089 }, 00:18:27.089 "vs": { 00:18:27.089 "nvme_version": "1.4" 00:18:27.089 }, 00:18:27.089 "ns_data": { 00:18:27.089 "id": 1, 00:18:27.089 "can_share": false 00:18:27.089 } 00:18:27.089 } 00:18:27.089 ], 00:18:27.089 "mp_policy": "active_passive" 00:18:27.089 } 00:18:27.089 } 00:18:27.089 ]' 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:27.089 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:27.351 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:27.351 12:44:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=95cb178f-64a0-4bfb-9f9f-f831abdc3f53 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 95cb178f-64a0-4bfb-9f9f-f831abdc3f53 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:27.612 12:44:27 
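
get_bdev_size is just jq arithmetic over bdev_get_bdevs -b output: block_size 4096 times num_blocks 1310720 gives the 5120 MiB echoed above. The interesting part is what follows it: a 103424 MiB logical volume is created on that 5 GiB namespace, which only works because -t makes it thin-provisioned, letting the test stand up a full-sized FTL address space on a small CI disk while clusters are allocated on demand (the advertised 26476544 blocks x 4 KiB = 103424 MiB show up in the lvol dumps below). Condensed, using only the RPCs traced in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bs=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
  nb=$("$rpc" bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
  echo "base: $(( bs * nb / 1024 / 1024 )) MiB"                   # 5120
  # Lvolstore on the raw namespace, then a thin-provisioned (-t) volume
  # advertising 103424 MiB of address space on 5 GiB of backing media.
  lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)
  "$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"
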
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:27.612 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:27.872 { 00:18:27.872 "name": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:27.872 "aliases": [ 00:18:27.872 "lvs/nvme0n1p0" 00:18:27.872 ], 00:18:27.872 "product_name": "Logical Volume", 00:18:27.872 "block_size": 4096, 00:18:27.872 "num_blocks": 26476544, 00:18:27.872 "uuid": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:27.872 "assigned_rate_limits": { 00:18:27.872 "rw_ios_per_sec": 0, 00:18:27.872 "rw_mbytes_per_sec": 0, 00:18:27.872 "r_mbytes_per_sec": 0, 00:18:27.872 "w_mbytes_per_sec": 0 00:18:27.872 }, 00:18:27.872 "claimed": false, 00:18:27.872 "zoned": false, 00:18:27.872 "supported_io_types": { 00:18:27.872 "read": true, 00:18:27.872 "write": true, 00:18:27.872 "unmap": true, 00:18:27.872 "flush": false, 00:18:27.872 "reset": true, 00:18:27.872 "nvme_admin": false, 00:18:27.872 "nvme_io": false, 00:18:27.872 "nvme_io_md": false, 00:18:27.872 "write_zeroes": true, 00:18:27.872 "zcopy": false, 00:18:27.872 "get_zone_info": false, 00:18:27.872 "zone_management": false, 00:18:27.872 "zone_append": false, 00:18:27.872 "compare": false, 00:18:27.872 "compare_and_write": false, 00:18:27.872 "abort": false, 00:18:27.872 "seek_hole": true, 00:18:27.872 "seek_data": true, 00:18:27.872 "copy": false, 00:18:27.872 "nvme_iov_md": false 00:18:27.872 }, 00:18:27.872 "driver_specific": { 00:18:27.872 "lvol": { 00:18:27.872 "lvol_store_uuid": "95cb178f-64a0-4bfb-9f9f-f831abdc3f53", 00:18:27.872 "base_bdev": "nvme0n1", 00:18:27.872 "thin_provision": true, 00:18:27.872 "num_allocated_clusters": 0, 00:18:27.872 "snapshot": false, 00:18:27.872 "clone": false, 00:18:27.872 "esnap_clone": false 00:18:27.872 } 00:18:27.872 } 00:18:27.872 } 00:18:27.872 ]' 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:27.872 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.133 12:44:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:28.393 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.393 { 00:18:28.393 "name": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:28.393 "aliases": [ 00:18:28.393 "lvs/nvme0n1p0" 00:18:28.393 ], 00:18:28.393 "product_name": "Logical Volume", 00:18:28.393 "block_size": 4096, 00:18:28.393 "num_blocks": 26476544, 00:18:28.393 "uuid": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:28.393 "assigned_rate_limits": { 00:18:28.393 "rw_ios_per_sec": 0, 00:18:28.393 "rw_mbytes_per_sec": 0, 00:18:28.393 "r_mbytes_per_sec": 0, 00:18:28.393 "w_mbytes_per_sec": 0 00:18:28.393 }, 00:18:28.393 "claimed": false, 00:18:28.393 "zoned": false, 00:18:28.393 "supported_io_types": { 00:18:28.393 "read": true, 00:18:28.393 "write": true, 00:18:28.393 "unmap": true, 00:18:28.393 "flush": false, 00:18:28.393 "reset": true, 00:18:28.393 "nvme_admin": false, 00:18:28.393 "nvme_io": false, 00:18:28.393 "nvme_io_md": false, 00:18:28.393 "write_zeroes": true, 00:18:28.393 "zcopy": false, 00:18:28.393 "get_zone_info": false, 00:18:28.393 "zone_management": false, 00:18:28.393 "zone_append": false, 00:18:28.393 "compare": false, 00:18:28.393 "compare_and_write": false, 00:18:28.393 "abort": false, 00:18:28.393 "seek_hole": true, 00:18:28.393 "seek_data": true, 00:18:28.393 "copy": false, 00:18:28.393 "nvme_iov_md": false 00:18:28.393 }, 00:18:28.394 "driver_specific": { 00:18:28.394 "lvol": { 00:18:28.394 "lvol_store_uuid": "95cb178f-64a0-4bfb-9f9f-f831abdc3f53", 00:18:28.394 "base_bdev": "nvme0n1", 00:18:28.394 "thin_provision": true, 00:18:28.394 "num_allocated_clusters": 0, 00:18:28.394 "snapshot": false, 00:18:28.394 "clone": false, 00:18:28.394 "esnap_clone": false 00:18:28.394 } 00:18:28.394 } 00:18:28.394 } 00:18:28.394 ]' 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:28.394 12:44:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:28.652 
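
One genuine script bug is recorded next: fio.sh line 52 runs `[ ... -eq 1 ]` with an unquoted operand that expands to nothing in this configuration, so `[` receives only `-eq 1` and bash prints the "unary operator expected" error seen on the following line; the test simply evaluates false and the run continues down the default branch. The usual hardening is shown below with a hypothetical $SOME_FLAG, since the actual variable name is not visible in the trace:

  # Hypothetical guard; $SOME_FLAG stands in for whatever fio.sh line 52 tests.
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then   # quote + default: '[' always sees a word
      echo "optional branch"
  fi
  # [[ ]] variant: an empty operand of -eq is evaluated arithmetically as 0,
  # but the explicit default keeps the intent obvious.
  [[ ${SOME_FLAG:-0} -eq 1 ]] && echo "optional branch"
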
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.652 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efc80ae3-f87e-4aec-8e20-8de0b91f430f 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.911 { 00:18:28.911 "name": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:28.911 "aliases": [ 00:18:28.911 "lvs/nvme0n1p0" 00:18:28.911 ], 00:18:28.911 "product_name": "Logical Volume", 00:18:28.911 "block_size": 4096, 00:18:28.911 "num_blocks": 26476544, 00:18:28.911 "uuid": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:28.911 "assigned_rate_limits": { 00:18:28.911 "rw_ios_per_sec": 0, 00:18:28.911 "rw_mbytes_per_sec": 0, 00:18:28.911 "r_mbytes_per_sec": 0, 00:18:28.911 "w_mbytes_per_sec": 0 00:18:28.911 }, 00:18:28.911 "claimed": false, 00:18:28.911 "zoned": false, 00:18:28.911 "supported_io_types": { 00:18:28.911 "read": true, 00:18:28.911 "write": true, 00:18:28.911 "unmap": true, 00:18:28.911 "flush": false, 00:18:28.911 "reset": true, 00:18:28.911 "nvme_admin": false, 00:18:28.911 "nvme_io": false, 00:18:28.911 "nvme_io_md": false, 00:18:28.911 "write_zeroes": true, 00:18:28.911 "zcopy": false, 00:18:28.911 "get_zone_info": false, 00:18:28.911 "zone_management": false, 00:18:28.911 "zone_append": false, 00:18:28.911 "compare": false, 00:18:28.911 "compare_and_write": false, 00:18:28.911 "abort": false, 00:18:28.911 "seek_hole": true, 00:18:28.911 "seek_data": true, 00:18:28.911 "copy": false, 00:18:28.911 "nvme_iov_md": false 00:18:28.911 }, 00:18:28.911 "driver_specific": { 00:18:28.911 "lvol": { 00:18:28.911 "lvol_store_uuid": "95cb178f-64a0-4bfb-9f9f-f831abdc3f53", 00:18:28.911 "base_bdev": "nvme0n1", 00:18:28.911 "thin_provision": true, 00:18:28.911 "num_allocated_clusters": 0, 00:18:28.911 "snapshot": false, 00:18:28.911 "clone": false, 00:18:28.911 "esnap_clone": false 00:18:28.911 } 00:18:28.911 } 00:18:28.911 } 00:18:28.911 ]' 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:28.911 12:44:28 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d efc80ae3-f87e-4aec-8e20-8de0b91f430f -c nvc0n1p0 --l2p_dram_limit 60 00:18:29.173 [2024-12-14 12:44:28.736645] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.736769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:29.173 [2024-12-14 12:44:28.736789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:29.173 [2024-12-14 12:44:28.736797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.736851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.736861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:29.173 [2024-12-14 12:44:28.736872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:29.173 [2024-12-14 12:44:28.736879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.736914] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:29.173 [2024-12-14 12:44:28.737480] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:29.173 [2024-12-14 12:44:28.737504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.737510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:29.173 [2024-12-14 12:44:28.737519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:18:29.173 [2024-12-14 12:44:28.737526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.737737] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5ad70e3a-35d1-4761-b8bb-0ef1df32d0d3 00:18:29.173 [2024-12-14 12:44:28.739107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.739136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:29.173 [2024-12-14 12:44:28.739145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:29.173 [2024-12-14 12:44:28.739153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.745931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.745959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:29.173 [2024-12-14 12:44:28.745967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.706 ms 00:18:29.173 [2024-12-14 12:44:28.745975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.746071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.746081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:29.173 [2024-12-14 12:44:28.746088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:29.173 [2024-12-14 12:44:28.746099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.746141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.746152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:29.173 [2024-12-14 12:44:28.746158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:29.173 [2024-12-14 12:44:28.746166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:29.173 [2024-12-14 12:44:28.746191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:29.173 [2024-12-14 12:44:28.749373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.749395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:29.173 [2024-12-14 12:44:28.749405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:18:29.173 [2024-12-14 12:44:28.749413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.749453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.749462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:29.173 [2024-12-14 12:44:28.749469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:29.173 [2024-12-14 12:44:28.749475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.749503] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:29.173 [2024-12-14 12:44:28.749629] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:29.173 [2024-12-14 12:44:28.749643] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:29.173 [2024-12-14 12:44:28.749652] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:29.173 [2024-12-14 12:44:28.749662] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:29.173 [2024-12-14 12:44:28.749669] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:29.173 [2024-12-14 12:44:28.749677] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:29.173 [2024-12-14 12:44:28.749684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:29.173 [2024-12-14 12:44:28.749691] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:29.173 [2024-12-14 12:44:28.749697] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:29.173 [2024-12-14 12:44:28.749705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.749719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:29.173 [2024-12-14 12:44:28.749727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:18:29.173 [2024-12-14 12:44:28.749733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.749812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.173 [2024-12-14 12:44:28.749819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:29.173 [2024-12-14 12:44:28.749827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:29.173 [2024-12-14 12:44:28.749833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.173 [2024-12-14 12:44:28.749927] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:29.173 [2024-12-14 12:44:28.749936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:29.173 
[2024-12-14 12:44:28.749945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.173 [2024-12-14 12:44:28.749951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.173 [2024-12-14 12:44:28.749960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:29.173 [2024-12-14 12:44:28.749965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:29.173 [2024-12-14 12:44:28.749975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:29.173 [2024-12-14 12:44:28.749981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:29.173 [2024-12-14 12:44:28.749989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:29.173 [2024-12-14 12:44:28.749994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.173 [2024-12-14 12:44:28.750000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:29.173 [2024-12-14 12:44:28.750007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:29.173 [2024-12-14 12:44:28.750014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:29.173 [2024-12-14 12:44:28.750019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:29.173 [2024-12-14 12:44:28.750025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:29.173 [2024-12-14 12:44:28.750030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.173 [2024-12-14 12:44:28.750038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:29.173 [2024-12-14 12:44:28.750044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:29.173 [2024-12-14 12:44:28.750051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.173 [2024-12-14 12:44:28.750071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:29.173 [2024-12-14 12:44:28.750079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:29.173 [2024-12-14 12:44:28.750084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.174 [2024-12-14 12:44:28.750091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:29.174 [2024-12-14 12:44:28.750096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.174 [2024-12-14 12:44:28.750109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:29.174 [2024-12-14 12:44:28.750116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.174 [2024-12-14 12:44:28.750128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:29.174 [2024-12-14 12:44:28.750133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:29.174 [2024-12-14 12:44:28.750146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:29.174 [2024-12-14 12:44:28.750155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:29.174 [2024-12-14 12:44:28.750182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:29.174 [2024-12-14 12:44:28.750187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:29.174 [2024-12-14 12:44:28.750200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:29.174 [2024-12-14 12:44:28.750205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:29.174 [2024-12-14 12:44:28.750213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:29.174 [2024-12-14 12:44:28.750219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:29.174 [2024-12-14 12:44:28.750233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:29.174 [2024-12-14 12:44:28.750239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750244] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:29.174 [2024-12-14 12:44:28.750252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:29.174 [2024-12-14 12:44:28.750258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:29.174 [2024-12-14 12:44:28.750265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:29.174 [2024-12-14 12:44:28.750271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:29.174 [2024-12-14 12:44:28.750279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:29.174 [2024-12-14 12:44:28.750285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:29.174 [2024-12-14 12:44:28.750292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:29.174 [2024-12-14 12:44:28.750297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:29.174 [2024-12-14 12:44:28.750303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:29.174 [2024-12-14 12:44:28.750311] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:29.174 [2024-12-14 12:44:28.750320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:29.174 [2024-12-14 12:44:28.750333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:29.174 [2024-12-14 12:44:28.750339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:29.174 [2024-12-14 12:44:28.750346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:29.174 [2024-12-14 12:44:28.750351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:29.174 [2024-12-14 12:44:28.750359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:29.174 [2024-12-14 
12:44:28.750364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:29.174 [2024-12-14 12:44:28.750372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:29.174 [2024-12-14 12:44:28.750378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:29.174 [2024-12-14 12:44:28.750388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:29.174 [2024-12-14 12:44:28.750418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:29.174 [2024-12-14 12:44:28.750428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:29.174 [2024-12-14 12:44:28.750444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:29.174 [2024-12-14 12:44:28.750449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:29.174 [2024-12-14 12:44:28.750456] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:29.174 [2024-12-14 12:44:28.750462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.174 [2024-12-14 12:44:28.750470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:29.174 [2024-12-14 12:44:28.750475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:18:29.174 [2024-12-14 12:44:28.750483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.174 [2024-12-14 12:44:28.750550] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
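
The layout dump is internally consistent and worth decoding: 20971520 L2P entries (one per 4 KiB user block) at 4 bytes each is exactly the 80.00 MiB reserved for the l2p region on the cache device, while the --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that table stays resident, which is why startup below settles on "l2p maximum resident size is: 59 (of 60) MiB". The arithmetic, spelled out:

  # L2P sizing implied by the dump above.
  entries=20971520        # "L2P entries: 20971520" (one per user block)
  entry_size=4            # "L2P address size: 4" (bytes)
  echo "$(( entries * entry_size / 1024 / 1024 )) MiB"   # 80, matching "blocks: 80.00 MiB"
  # --l2p_dram_limit 60 keeps at most ~60 MiB of that table in DRAM; the
  # remainder is paged via the l2p region on the NV cache device.
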
00:18:29.174 [2024-12-14 12:44:28.750562] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:31.720 [2024-12-14 12:44:31.136609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.136677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:31.720 [2024-12-14 12:44:31.136694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2386.049 ms 00:18:31.720 [2024-12-14 12:44:31.136706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.720 [2024-12-14 12:44:31.164706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.164754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.720 [2024-12-14 12:44:31.164767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.782 ms 00:18:31.720 [2024-12-14 12:44:31.164777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.720 [2024-12-14 12:44:31.164914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.164928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:31.720 [2024-12-14 12:44:31.164936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:31.720 [2024-12-14 12:44:31.164948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.720 [2024-12-14 12:44:31.208578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.208622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.720 [2024-12-14 12:44:31.208637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.575 ms 00:18:31.720 [2024-12-14 12:44:31.208648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.720 [2024-12-14 12:44:31.208695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.208706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.720 [2024-12-14 12:44:31.208715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:31.720 [2024-12-14 12:44:31.208726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.720 [2024-12-14 12:44:31.209185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.209216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.720 [2024-12-14 12:44:31.209226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:18:31.720 [2024-12-14 12:44:31.209240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.720 [2024-12-14 12:44:31.209367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.720 [2024-12-14 12:44:31.209379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.721 [2024-12-14 12:44:31.209388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:31.721 [2024-12-14 12:44:31.209400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.225365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.225396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.721 [2024-12-14 
12:44:31.225406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.933 ms 00:18:31.721 [2024-12-14 12:44:31.225415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.237545] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:31.721 [2024-12-14 12:44:31.254625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.254833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:31.721 [2024-12-14 12:44:31.254854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.119 ms 00:18:31.721 [2024-12-14 12:44:31.254864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.304558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.304592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:31.721 [2024-12-14 12:44:31.304608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.653 ms 00:18:31.721 [2024-12-14 12:44:31.304616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.304808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.304826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:31.721 [2024-12-14 12:44:31.304840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:18:31.721 [2024-12-14 12:44:31.304848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.327642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.327676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:31.721 [2024-12-14 12:44:31.327689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.752 ms 00:18:31.721 [2024-12-14 12:44:31.327697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.349820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.349961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:31.721 [2024-12-14 12:44:31.349980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.079 ms 00:18:31.721 [2024-12-14 12:44:31.349988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.350571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.350589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:31.721 [2024-12-14 12:44:31.350600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:18:31.721 [2024-12-14 12:44:31.350608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 12:44:31.419434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.419464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:31.721 [2024-12-14 12:44:31.419479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.781 ms 00:18:31.721 [2024-12-14 12:44:31.419490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.721 [2024-12-14 
12:44:31.443672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.721 [2024-12-14 12:44:31.443702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:31.721 [2024-12-14 12:44:31.443715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.099 ms 00:18:31.721 [2024-12-14 12:44:31.443723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.982 [2024-12-14 12:44:31.466220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.982 [2024-12-14 12:44:31.466250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:31.982 [2024-12-14 12:44:31.466263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.450 ms 00:18:31.982 [2024-12-14 12:44:31.466270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.982 [2024-12-14 12:44:31.489207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.982 [2024-12-14 12:44:31.489236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:31.982 [2024-12-14 12:44:31.489248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.897 ms 00:18:31.982 [2024-12-14 12:44:31.489255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.982 [2024-12-14 12:44:31.489304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.982 [2024-12-14 12:44:31.489314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:31.982 [2024-12-14 12:44:31.489329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:31.982 [2024-12-14 12:44:31.489336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.982 [2024-12-14 12:44:31.489422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.982 [2024-12-14 12:44:31.489432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:31.982 [2024-12-14 12:44:31.489442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:31.982 [2024-12-14 12:44:31.489449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.982 [2024-12-14 12:44:31.490456] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2753.336 ms, result 0 00:18:31.982 { 00:18:31.982 "name": "ftl0", 00:18:31.982 "uuid": "5ad70e3a-35d1-4761-b8bb-0ef1df32d0d3" 00:18:31.982 } 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:31.982 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:32.243 [ 00:18:32.243 { 00:18:32.243 "name": "ftl0", 00:18:32.243 "aliases": [ 00:18:32.243 "5ad70e3a-35d1-4761-b8bb-0ef1df32d0d3" 00:18:32.243 ], 00:18:32.243 "product_name": "FTL 
disk", 00:18:32.243 "block_size": 4096, 00:18:32.243 "num_blocks": 20971520, 00:18:32.243 "uuid": "5ad70e3a-35d1-4761-b8bb-0ef1df32d0d3", 00:18:32.243 "assigned_rate_limits": { 00:18:32.243 "rw_ios_per_sec": 0, 00:18:32.243 "rw_mbytes_per_sec": 0, 00:18:32.243 "r_mbytes_per_sec": 0, 00:18:32.243 "w_mbytes_per_sec": 0 00:18:32.243 }, 00:18:32.243 "claimed": false, 00:18:32.243 "zoned": false, 00:18:32.243 "supported_io_types": { 00:18:32.243 "read": true, 00:18:32.243 "write": true, 00:18:32.243 "unmap": true, 00:18:32.243 "flush": true, 00:18:32.243 "reset": false, 00:18:32.243 "nvme_admin": false, 00:18:32.243 "nvme_io": false, 00:18:32.243 "nvme_io_md": false, 00:18:32.243 "write_zeroes": true, 00:18:32.243 "zcopy": false, 00:18:32.243 "get_zone_info": false, 00:18:32.243 "zone_management": false, 00:18:32.243 "zone_append": false, 00:18:32.243 "compare": false, 00:18:32.243 "compare_and_write": false, 00:18:32.243 "abort": false, 00:18:32.243 "seek_hole": false, 00:18:32.243 "seek_data": false, 00:18:32.243 "copy": false, 00:18:32.243 "nvme_iov_md": false 00:18:32.243 }, 00:18:32.243 "driver_specific": { 00:18:32.243 "ftl": { 00:18:32.243 "base_bdev": "efc80ae3-f87e-4aec-8e20-8de0b91f430f", 00:18:32.243 "cache": "nvc0n1p0" 00:18:32.243 } 00:18:32.243 } 00:18:32.243 } 00:18:32.243 ] 00:18:32.243 12:44:31 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:32.243 12:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:32.243 12:44:31 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:32.504 12:44:32 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:32.504 12:44:32 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:32.764 [2024-12-14 12:44:32.307277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.764 [2024-12-14 12:44:32.307315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.764 [2024-12-14 12:44:32.307326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.764 [2024-12-14 12:44:32.307336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.764 [2024-12-14 12:44:32.307371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.764 [2024-12-14 12:44:32.310262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.764 [2024-12-14 12:44:32.310288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.764 [2024-12-14 12:44:32.310301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.871 ms 00:18:32.764 [2024-12-14 12:44:32.310309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.764 [2024-12-14 12:44:32.310751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.764 [2024-12-14 12:44:32.310771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.764 [2024-12-14 12:44:32.310781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:18:32.764 [2024-12-14 12:44:32.310789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.764 [2024-12-14 12:44:32.314032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.764 [2024-12-14 12:44:32.314174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.764 
[2024-12-14 12:44:32.314193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:18:32.764 [2024-12-14 12:44:32.314203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.764 [2024-12-14 12:44:32.320311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.764 [2024-12-14 12:44:32.320337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.764 [2024-12-14 12:44:32.320349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.071 ms 00:18:32.764 [2024-12-14 12:44:32.320358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.343376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.343405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.765 [2024-12-14 12:44:32.343429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.926 ms 00:18:32.765 [2024-12-14 12:44:32.343436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.356053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.356088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.765 [2024-12-14 12:44:32.356101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.570 ms 00:18:32.765 [2024-12-14 12:44:32.356107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.356266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.356275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.765 [2024-12-14 12:44:32.356284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:18:32.765 [2024-12-14 12:44:32.356289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.373727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.373828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:32.765 [2024-12-14 12:44:32.373842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.412 ms 00:18:32.765 [2024-12-14 12:44:32.373848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.390895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.390918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:32.765 [2024-12-14 12:44:32.390928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.016 ms 00:18:32.765 [2024-12-14 12:44:32.390934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.407602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.407626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:32.765 [2024-12-14 12:44:32.407635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.633 ms 00:18:32.765 [2024-12-14 12:44:32.407640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.424344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.765 [2024-12-14 12:44:32.424368] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:32.765 [2024-12-14 12:44:32.424378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.616 ms 00:18:32.765 [2024-12-14 12:44:32.424383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.765 [2024-12-14 12:44:32.424419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:32.765 [2024-12-14 12:44:32.424430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 
[2024-12-14 12:44:32.424578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:32.765 [2024-12-14 12:44:32.424746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:32.765 [2024-12-14 12:44:32.424884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.424993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:32.766 [2024-12-14 12:44:32.425145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:32.766 [2024-12-14 12:44:32.425153] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5ad70e3a-35d1-4761-b8bb-0ef1df32d0d3 00:18:32.766 [2024-12-14 12:44:32.425159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:32.766 [2024-12-14 12:44:32.425168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:32.766 [2024-12-14 12:44:32.425173] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:32.766 [2024-12-14 12:44:32.425182] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:32.766 [2024-12-14 12:44:32.425187] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:32.766 [2024-12-14 12:44:32.425195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:32.766 [2024-12-14 12:44:32.425200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:32.766 [2024-12-14 12:44:32.425207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:32.766 [2024-12-14 12:44:32.425212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:32.766 [2024-12-14 12:44:32.425219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.766 [2024-12-14 12:44:32.425225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:32.766 [2024-12-14 12:44:32.425234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:18:32.766 [2024-12-14 12:44:32.425240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.766 [2024-12-14 12:44:32.434991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.766 [2024-12-14 12:44:32.435017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:32.766 [2024-12-14 12:44:32.435026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.721 ms 00:18:32.766 [2024-12-14 12:44:32.435032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.766 [2024-12-14 12:44:32.435346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.766 [2024-12-14 12:44:32.435358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:32.766 [2024-12-14 12:44:32.435366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:18:32.766 [2024-12-14 12:44:32.435372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.766 [2024-12-14 12:44:32.471954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.766 [2024-12-14 12:44:32.471979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.766 [2024-12-14 12:44:32.471989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.766 [2024-12-14 12:44:32.471995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
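
For context on the statistics dump above: WAF is the write amplification factor, here evidently computed as total media writes divided by user writes,

    WAF = total writes / user writes = 960 / 0 -> inf

Since this pass only creates, scrubs, and unloads the FTL instance without issuing any user I/O, user writes stays at 0 and the ratio prints as inf; the 960 recorded writes are consistent with the FTL's own metadata persists (superblock, band and chunk info, valid/trim maps, P2L) traced during the startup and shutdown sequences above. The Rollback entries that begin above and continue below are the shutdown manager revisiting each startup step's rollback hook in reverse order; most report duration: 0.000 ms because there is nothing left to undo at that point.
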
00:18:32.766 [2024-12-14 12:44:32.472049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.766 [2024-12-14 12:44:32.472067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.766 [2024-12-14 12:44:32.472076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.766 [2024-12-14 12:44:32.472082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.766 [2024-12-14 12:44:32.472160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.766 [2024-12-14 12:44:32.472171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.766 [2024-12-14 12:44:32.472179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.766 [2024-12-14 12:44:32.472185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.766 [2024-12-14 12:44:32.472211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.766 [2024-12-14 12:44:32.472218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.766 [2024-12-14 12:44:32.472225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.766 [2024-12-14 12:44:32.472231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.538180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.538341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.025 [2024-12-14 12:44:32.538357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.538364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.588926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.588962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.025 [2024-12-14 12:44:32.588973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.588980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.589090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.025 [2024-12-14 12:44:32.589101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.589107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.589174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.025 [2024-12-14 12:44:32.589182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.589188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.589296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.025 [2024-12-14 12:44:32.589305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 
12:44:32.589313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.589364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.025 [2024-12-14 12:44:32.589372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.589379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.589432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.025 [2024-12-14 12:44:32.589440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.589445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.025 [2024-12-14 12:44:32.589504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.025 [2024-12-14 12:44:32.589512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.025 [2024-12-14 12:44:32.589519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.025 [2024-12-14 12:44:32.589674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 282.358 ms, result 0 00:18:33.025 true 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76887 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76887 ']' 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76887 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76887 00:18:33.025 killing process with pid 76887 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76887' 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76887 00:18:33.025 12:44:32 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76887 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:43.089 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:43.090 12:44:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:43.090 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:43.090 fio-3.35 00:18:43.090 Starting 1 thread 00:18:49.737 00:18:49.737 test: (groupid=0, jobs=1): err= 0: pid=77076: Sat Dec 14 12:44:48 2024 00:18:49.737 read: IOPS=784, BW=52.1MiB/s (54.6MB/s)(255MiB/4886msec) 00:18:49.737 slat (nsec): min=4183, max=40785, avg=7571.93, stdev=3915.86 00:18:49.737 clat (usec): min=275, max=1530, avg=579.27, stdev=223.73 00:18:49.737 lat (usec): min=280, max=1539, avg=586.85, stdev=225.21 00:18:49.737 clat percentiles (usec): 00:18:49.737 | 1.00th=[ 318], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:18:49.737 | 30.00th=[ 355], 40.00th=[ 478], 50.00th=[ 570], 60.00th=[ 603], 00:18:49.737 | 70.00th=[ 725], 80.00th=[ 816], 90.00th=[ 898], 95.00th=[ 947], 00:18:49.737 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1467], 99.95th=[ 1516], 00:18:49.737 | 99.99th=[ 1532] 00:18:49.737 write: IOPS=790, BW=52.5MiB/s (55.0MB/s)(256MiB/4879msec); 0 zone resets 00:18:49.737 slat (nsec): min=14966, max=84605, avg=21695.39, stdev=5442.15 00:18:49.737 clat (usec): min=304, max=2108, avg=648.92, stdev=271.14 00:18:49.737 lat (usec): min=325, max=2136, avg=670.62, stdev=273.41 00:18:49.737 clat percentiles (usec): 00:18:49.737 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 351], 20.00th=[ 359], 00:18:49.737 | 30.00th=[ 383], 40.00th=[ 570], 50.00th=[ 644], 60.00th=[ 693], 00:18:49.737 | 70.00th=[ 775], 80.00th=[ 857], 90.00th=[ 979], 95.00th=[ 1037], 00:18:49.737 | 99.00th=[ 1713], 99.50th=[ 1795], 99.90th=[ 1958], 99.95th=[ 2114], 00:18:49.737 | 99.99th=[ 2114] 00:18:49.737 bw ( KiB/s): min=35224, max=89080, per=100.00%, avg=54309.33, stdev=19823.49, samples=9 00:18:49.737 iops : min= 518, max= 1310, avg=798.67, stdev=291.52, samples=9 00:18:49.737 lat (usec) : 500=37.98%, 750=32.14%, 
1000=25.10% 00:18:49.737 lat (msec) : 2=4.75%, 4=0.04% 00:18:49.737 cpu : usr=99.06%, sys=0.12%, ctx=9, majf=0, minf=1167 00:18:49.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.737 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:49.737 00:18:49.737 Run status group 0 (all jobs): 00:18:49.737 READ: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=255MiB (267MB), run=4886-4886msec 00:18:49.737 WRITE: bw=52.5MiB/s (55.0MB/s), 52.5MiB/s-52.5MiB/s (55.0MB/s-55.0MB/s), io=256MiB (269MB), run=4879-4879msec 00:18:50.678 ----------------------------------------------------- 00:18:50.678 Suppressions used: 00:18:50.678 count bytes template 00:18:50.678 1 5 /usr/src/fio/parse.c 00:18:50.678 1 8 libtcmalloc_minimal.so 00:18:50.678 1 904 libcrypto.so 00:18:50.678 ----------------------------------------------------- 00:18:50.678 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.678 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:50.938 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.938 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:50.939 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:50.939 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:50.939 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:50.939 12:44:50 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:50.939 12:44:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:50.939 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:50.939 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:50.939 fio-3.35 00:18:50.939 Starting 2 threads 00:19:17.489 00:19:17.489 first_half: (groupid=0, jobs=1): err= 0: pid=77186: Sat Dec 14 12:45:13 2024 00:19:17.489 read: IOPS=2980, BW=11.6MiB/s (12.2MB/s)(256MiB/21971msec) 00:19:17.489 slat (nsec): min=3077, max=28904, avg=4787.42, stdev=1268.26 00:19:17.489 clat (msec): min=9, max=257, avg=36.60, stdev=20.52 00:19:17.489 lat (msec): min=9, max=257, avg=36.61, stdev=20.52 00:19:17.489 clat percentiles (msec): 00:19:17.489 | 1.00th=[ 26], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:19:17.489 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 32], 00:19:17.489 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 42], 95.00th=[ 64], 00:19:17.489 | 99.00th=[ 150], 99.50th=[ 161], 99.90th=[ 188], 99.95th=[ 192], 00:19:17.489 | 99.99th=[ 228] 00:19:17.489 write: IOPS=2999, BW=11.7MiB/s (12.3MB/s)(256MiB/21851msec); 0 zone resets 00:19:17.489 slat (usec): min=3, max=937, avg= 6.41, stdev= 7.06 00:19:17.489 clat (usec): min=381, max=29612, avg=6317.20, stdev=4948.62 00:19:17.489 lat (usec): min=389, max=29619, avg=6323.61, stdev=4949.42 00:19:17.489 clat percentiles (usec): 00:19:17.489 | 1.00th=[ 791], 5.00th=[ 1729], 10.00th=[ 2474], 20.00th=[ 3097], 00:19:17.489 | 30.00th=[ 3720], 40.00th=[ 4359], 50.00th=[ 5014], 60.00th=[ 5473], 00:19:17.489 | 70.00th=[ 5735], 80.00th=[ 7046], 90.00th=[14877], 95.00th=[18482], 00:19:17.489 | 99.00th=[22676], 99.50th=[24511], 99.90th=[27919], 99.95th=[28705], 00:19:17.489 | 99.99th=[29492] 00:19:17.489 bw ( KiB/s): min= 552, max=42080, per=100.00%, avg=26021.20, stdev=13084.59, samples=20 00:19:17.489 iops : min= 138, max=10520, avg=6505.30, stdev=3271.15, samples=20 00:19:17.489 lat (usec) : 500=0.03%, 750=0.35%, 1000=0.75% 00:19:17.489 lat (msec) : 2=1.92%, 4=14.19%, 10=24.87%, 20=6.40%, 50=48.15% 00:19:17.489 lat (msec) : 100=1.85%, 250=1.49%, 500=0.01% 00:19:17.489 cpu : usr=99.26%, sys=0.11%, ctx=81, majf=0, minf=5579 00:19:17.489 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:17.489 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.489 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.489 issued rwts: total=65483,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.489 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.489 second_half: (groupid=0, jobs=1): err= 0: pid=77187: Sat Dec 14 12:45:13 2024 00:19:17.489 read: IOPS=2961, BW=11.6MiB/s (12.1MB/s)(256MiB/22112msec) 00:19:17.489 slat (nsec): min=3174, max=32637, avg=5001.02, stdev=1435.70 00:19:17.489 clat (msec): min=6, max=384, avg=36.21, stdev=23.96 00:19:17.489 lat (msec): min=6, max=384, avg=36.21, stdev=23.96 00:19:17.489 clat percentiles (msec): 00:19:17.489 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 30], 20.00th=[ 31], 00:19:17.489 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:19:17.489 | 70.00th=[ 33], 80.00th=[ 36], 90.00th=[ 40], 95.00th=[ 73], 00:19:17.489 | 99.00th=[ 
150], 99.50th=[ 161], 99.90th=[ 279], 99.95th=[ 338], 00:19:17.489 | 99.99th=[ 376] 00:19:17.489 write: IOPS=3091, BW=12.1MiB/s (12.7MB/s)(256MiB/21199msec); 0 zone resets 00:19:17.489 slat (usec): min=3, max=816, avg= 6.47, stdev= 5.15 00:19:17.489 clat (usec): min=348, max=59151, avg=6990.88, stdev=7636.53 00:19:17.490 lat (usec): min=355, max=59156, avg=6997.36, stdev=7637.40 00:19:17.490 clat percentiles (usec): 00:19:17.490 | 1.00th=[ 725], 5.00th=[ 865], 10.00th=[ 1156], 20.00th=[ 2343], 00:19:17.490 | 30.00th=[ 2999], 40.00th=[ 3851], 50.00th=[ 4883], 60.00th=[ 5473], 00:19:17.490 | 70.00th=[ 5997], 80.00th=[ 8717], 90.00th=[18220], 95.00th=[23200], 00:19:17.490 | 99.00th=[34866], 99.50th=[44303], 99.90th=[54789], 99.95th=[56361], 00:19:17.490 | 99.99th=[58983] 00:19:17.490 bw ( KiB/s): min= 1040, max=42664, per=100.00%, avg=24966.10, stdev=12120.19, samples=21 00:19:17.490 iops : min= 260, max=10666, avg=6241.52, stdev=3030.05, samples=21 00:19:17.490 lat (usec) : 500=0.03%, 750=0.74%, 1000=3.30% 00:19:17.490 lat (msec) : 2=4.17%, 4=12.56%, 10=21.67%, 20=5.58%, 50=48.50% 00:19:17.490 lat (msec) : 100=1.69%, 250=1.71%, 500=0.06% 00:19:17.490 cpu : usr=99.34%, sys=0.12%, ctx=28, majf=0, minf=5524 00:19:17.490 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:17.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.490 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.490 issued rwts: total=65481,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.490 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.490 00:19:17.490 Run status group 0 (all jobs): 00:19:17.490 READ: bw=23.1MiB/s (24.3MB/s), 11.6MiB/s-11.6MiB/s (12.1MB/s-12.2MB/s), io=512MiB (536MB), run=21971-22112msec 00:19:17.490 WRITE: bw=23.4MiB/s (24.6MB/s), 11.7MiB/s-12.1MiB/s (12.3MB/s-12.7MB/s), io=512MiB (537MB), run=21199-21851msec 00:19:17.490 ----------------------------------------------------- 00:19:17.490 Suppressions used: 00:19:17.490 count bytes template 00:19:17.490 2 10 /usr/src/fio/parse.c 00:19:17.490 3 288 /usr/src/fio/iolog.c 00:19:17.490 1 8 libtcmalloc_minimal.so 00:19:17.490 1 904 libcrypto.so 00:19:17.490 ----------------------------------------------------- 00:19:17.490 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:17.490 12:45:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:17.490 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:17.490 fio-3.35 00:19:17.490 Starting 1 thread 00:19:32.400 00:19:32.400 test: (groupid=0, jobs=1): err= 0: pid=77484: Sat Dec 14 12:45:31 2024 00:19:32.400 read: IOPS=8109, BW=31.7MiB/s (33.2MB/s)(255MiB/8040msec) 00:19:32.400 slat (nsec): min=3122, max=19831, avg=3980.47, stdev=1030.19 00:19:32.400 clat (usec): min=534, max=30728, avg=15775.74, stdev=1523.70 00:19:32.400 lat (usec): min=538, max=30731, avg=15779.72, stdev=1523.84 00:19:32.400 clat percentiles (usec): 00:19:32.400 | 1.00th=[14615], 5.00th=[14877], 10.00th=[15008], 20.00th=[15139], 00:19:32.400 | 30.00th=[15270], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:19:32.400 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16581], 95.00th=[19006], 00:19:32.400 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24249], 99.95th=[26870], 00:19:32.400 | 99.99th=[30016] 00:19:32.400 write: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(256MiB/5740msec); 0 zone resets 00:19:32.400 slat (usec): min=4, max=383, avg= 7.77, stdev= 4.34 00:19:32.400 clat (usec): min=477, max=56148, avg=11166.09, stdev=11317.02 00:19:32.400 lat (usec): min=483, max=56154, avg=11173.86, stdev=11317.30 00:19:32.400 clat percentiles (usec): 00:19:32.400 | 1.00th=[ 627], 5.00th=[ 791], 10.00th=[ 898], 20.00th=[ 1090], 00:19:32.400 | 30.00th=[ 1319], 40.00th=[ 1713], 50.00th=[ 9896], 60.00th=[12387], 00:19:32.400 | 70.00th=[14877], 80.00th=[17695], 90.00th=[28181], 95.00th=[36963], 00:19:32.400 | 99.00th=[40633], 99.50th=[43254], 99.90th=[45876], 99.95th=[46924], 00:19:32.400 | 99.99th=[54264] 00:19:32.400 bw ( KiB/s): min=20264, max=64568, per=95.67%, avg=43690.67, stdev=12059.22, samples=12 00:19:32.400 iops : min= 5066, max=16142, avg=10922.67, stdev=3014.81, samples=12 00:19:32.400 lat (usec) : 500=0.01%, 750=1.86%, 1000=5.95% 00:19:32.400 lat (msec) : 2=12.73%, 4=0.60%, 10=4.31%, 20=64.26%, 50=10.29% 00:19:32.400 lat (msec) : 100=0.01% 00:19:32.400 cpu : usr=99.09%, sys=0.20%, ctx=21, majf=0, minf=5563 00:19:32.400 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:32.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:32.400 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:32.400 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:32.400 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:32.400 00:19:32.400 Run status group 0 (all jobs): 00:19:32.400 READ: bw=31.7MiB/s (33.2MB/s), 31.7MiB/s-31.7MiB/s (33.2MB/s-33.2MB/s), io=255MiB (267MB), run=8040-8040msec 00:19:32.400 WRITE: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=256MiB (268MB), run=5740-5740msec 00:19:33.784 ----------------------------------------------------- 00:19:33.784 Suppressions used: 00:19:33.784 count bytes template 00:19:33.784 1 5 /usr/src/fio/parse.c 00:19:33.784 2 192 /usr/src/fio/iolog.c 00:19:33.784 1 8 libtcmalloc_minimal.so 00:19:33.784 1 904 libcrypto.so 00:19:33.784 ----------------------------------------------------- 00:19:33.784 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:33.784 Remove shared memory files 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58906 /dev/shm/spdk_tgt_trace.pid75815 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:33.784 ************************************ 00:19:33.784 END TEST ftl_fio_basic 00:19:33.784 ************************************ 00:19:33.784 00:19:33.784 real 1m8.639s 00:19:33.784 user 2m30.617s 00:19:33.784 sys 0m3.138s 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.784 12:45:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:33.784 12:45:33 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:33.784 12:45:33 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:33.784 12:45:33 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.784 12:45:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:33.784 ************************************ 00:19:33.784 START TEST ftl_bdevperf 00:19:33.784 ************************************ 00:19:33.784 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:33.784 * Looking for test storage... 
00:19:33.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:33.784 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:33.784 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:33.784 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.046 --rc genhtml_branch_coverage=1 00:19:34.046 --rc genhtml_function_coverage=1 00:19:34.046 --rc genhtml_legend=1 00:19:34.046 --rc geninfo_all_blocks=1 00:19:34.046 --rc geninfo_unexecuted_blocks=1 00:19:34.046 00:19:34.046 ' 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.046 --rc genhtml_branch_coverage=1 00:19:34.046 
--rc genhtml_function_coverage=1 00:19:34.046 --rc genhtml_legend=1 00:19:34.046 --rc geninfo_all_blocks=1 00:19:34.046 --rc geninfo_unexecuted_blocks=1 00:19:34.046 00:19:34.046 ' 00:19:34.046 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.046 --rc genhtml_branch_coverage=1 00:19:34.046 --rc genhtml_function_coverage=1 00:19:34.046 --rc genhtml_legend=1 00:19:34.046 --rc geninfo_all_blocks=1 00:19:34.047 --rc geninfo_unexecuted_blocks=1 00:19:34.047 00:19:34.047 ' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.047 --rc genhtml_branch_coverage=1 00:19:34.047 --rc genhtml_function_coverage=1 00:19:34.047 --rc genhtml_legend=1 00:19:34.047 --rc geninfo_all_blocks=1 00:19:34.047 --rc geninfo_unexecuted_blocks=1 00:19:34.047 00:19:34.047 ' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77728 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77728 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77728 ']' 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.047 12:45:33 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:34.047 [2024-12-14 12:45:33.688298] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
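Before the EAL output below, note what the harness just did: bdevperf was started with -z (come up idle and wait for an RPC before running any I/O) and -T ftl0 (only the bdev named ftl0 is exercised), its pid was recorded, and a trap was installed so the process is killed on any exit path; waitforlisten() then blocks until the app answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of the same launch-and-wait pattern, with a plain polling loop standing in for the waitforlisten() helper (paths as in this run):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  trap 'kill $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  # poll the default RPC socket until the target accepts RPCs
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done

(The harness uses its killprocess() helper rather than a bare kill, but the flow is identical.)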
00:19:34.047 [2024-12-14 12:45:33.688647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77728 ] 00:19:34.308 [2024-12-14 12:45:33.857877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.308 [2024-12-14 12:45:33.980993] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:34.879 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:35.140 12:45:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:35.400 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:35.400 { 00:19:35.400 "name": "nvme0n1", 00:19:35.400 "aliases": [ 00:19:35.400 "bd51eaba-ed72-41df-b861-70dc9b7a16f9" 00:19:35.400 ], 00:19:35.400 "product_name": "NVMe disk", 00:19:35.400 "block_size": 4096, 00:19:35.400 "num_blocks": 1310720, 00:19:35.400 "uuid": "bd51eaba-ed72-41df-b861-70dc9b7a16f9", 00:19:35.400 "numa_id": -1, 00:19:35.400 "assigned_rate_limits": { 00:19:35.400 "rw_ios_per_sec": 0, 00:19:35.400 "rw_mbytes_per_sec": 0, 00:19:35.400 "r_mbytes_per_sec": 0, 00:19:35.400 "w_mbytes_per_sec": 0 00:19:35.400 }, 00:19:35.400 "claimed": true, 00:19:35.400 "claim_type": "read_many_write_one", 00:19:35.400 "zoned": false, 00:19:35.400 "supported_io_types": { 00:19:35.400 "read": true, 00:19:35.400 "write": true, 00:19:35.400 "unmap": true, 00:19:35.400 "flush": true, 00:19:35.400 "reset": true, 00:19:35.400 "nvme_admin": true, 00:19:35.400 "nvme_io": true, 00:19:35.400 "nvme_io_md": false, 00:19:35.400 "write_zeroes": true, 00:19:35.400 "zcopy": false, 00:19:35.400 "get_zone_info": false, 00:19:35.400 "zone_management": false, 00:19:35.400 "zone_append": false, 00:19:35.400 "compare": true, 00:19:35.400 "compare_and_write": false, 00:19:35.400 "abort": true, 00:19:35.400 "seek_hole": false, 00:19:35.400 "seek_data": false, 00:19:35.400 "copy": true, 00:19:35.400 "nvme_iov_md": false 00:19:35.400 }, 00:19:35.400 "driver_specific": { 00:19:35.400 
"nvme": [ 00:19:35.400 { 00:19:35.400 "pci_address": "0000:00:11.0", 00:19:35.400 "trid": { 00:19:35.400 "trtype": "PCIe", 00:19:35.400 "traddr": "0000:00:11.0" 00:19:35.400 }, 00:19:35.400 "ctrlr_data": { 00:19:35.400 "cntlid": 0, 00:19:35.400 "vendor_id": "0x1b36", 00:19:35.400 "model_number": "QEMU NVMe Ctrl", 00:19:35.400 "serial_number": "12341", 00:19:35.400 "firmware_revision": "8.0.0", 00:19:35.400 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:35.400 "oacs": { 00:19:35.400 "security": 0, 00:19:35.400 "format": 1, 00:19:35.400 "firmware": 0, 00:19:35.400 "ns_manage": 1 00:19:35.400 }, 00:19:35.400 "multi_ctrlr": false, 00:19:35.400 "ana_reporting": false 00:19:35.400 }, 00:19:35.400 "vs": { 00:19:35.400 "nvme_version": "1.4" 00:19:35.400 }, 00:19:35.400 "ns_data": { 00:19:35.400 "id": 1, 00:19:35.400 "can_share": false 00:19:35.400 } 00:19:35.400 } 00:19:35.400 ], 00:19:35.400 "mp_policy": "active_passive" 00:19:35.400 } 00:19:35.400 } 00:19:35.400 ]' 00:19:35.400 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:35.400 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:35.400 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=95cb178f-64a0-4bfb-9f9f-f831abdc3f53 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:35.661 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95cb178f-64a0-4bfb-9f9f-f831abdc3f53 00:19:35.921 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:36.182 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=52894748-ca45-4566-9b93-c5351797e5ba 00:19:36.182 12:45:35 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 52894748-ca45-4566-9b93-c5351797e5ba 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.444 12:45:36 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:36.444 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.704 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:36.704 { 00:19:36.704 "name": "2a12b6e4-0ea7-4901-a86e-114180f19855", 00:19:36.704 "aliases": [ 00:19:36.704 "lvs/nvme0n1p0" 00:19:36.704 ], 00:19:36.704 "product_name": "Logical Volume", 00:19:36.704 "block_size": 4096, 00:19:36.704 "num_blocks": 26476544, 00:19:36.704 "uuid": "2a12b6e4-0ea7-4901-a86e-114180f19855", 00:19:36.704 "assigned_rate_limits": { 00:19:36.704 "rw_ios_per_sec": 0, 00:19:36.704 "rw_mbytes_per_sec": 0, 00:19:36.704 "r_mbytes_per_sec": 0, 00:19:36.704 "w_mbytes_per_sec": 0 00:19:36.704 }, 00:19:36.704 "claimed": false, 00:19:36.704 "zoned": false, 00:19:36.704 "supported_io_types": { 00:19:36.705 "read": true, 00:19:36.705 "write": true, 00:19:36.705 "unmap": true, 00:19:36.705 "flush": false, 00:19:36.705 "reset": true, 00:19:36.705 "nvme_admin": false, 00:19:36.705 "nvme_io": false, 00:19:36.705 "nvme_io_md": false, 00:19:36.705 "write_zeroes": true, 00:19:36.705 "zcopy": false, 00:19:36.705 "get_zone_info": false, 00:19:36.705 "zone_management": false, 00:19:36.705 "zone_append": false, 00:19:36.705 "compare": false, 00:19:36.705 "compare_and_write": false, 00:19:36.705 "abort": false, 00:19:36.705 "seek_hole": true, 00:19:36.705 "seek_data": true, 00:19:36.705 "copy": false, 00:19:36.705 "nvme_iov_md": false 00:19:36.705 }, 00:19:36.705 "driver_specific": { 00:19:36.705 "lvol": { 00:19:36.705 "lvol_store_uuid": "52894748-ca45-4566-9b93-c5351797e5ba", 00:19:36.705 "base_bdev": "nvme0n1", 00:19:36.705 "thin_provision": true, 00:19:36.705 "num_allocated_clusters": 0, 00:19:36.705 "snapshot": false, 00:19:36.705 "clone": false, 00:19:36.705 "esnap_clone": false 00:19:36.705 } 00:19:36.705 } 00:19:36.705 } 00:19:36.705 ]' 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:36.705 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:36.965 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:36.965 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:36.966 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.966 12:45:36 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:36.966 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:36.966 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:36.966 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:36.966 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:37.224 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.224 { 00:19:37.224 "name": "2a12b6e4-0ea7-4901-a86e-114180f19855", 00:19:37.224 "aliases": [ 00:19:37.224 "lvs/nvme0n1p0" 00:19:37.225 ], 00:19:37.225 "product_name": "Logical Volume", 00:19:37.225 "block_size": 4096, 00:19:37.225 "num_blocks": 26476544, 00:19:37.225 "uuid": "2a12b6e4-0ea7-4901-a86e-114180f19855", 00:19:37.225 "assigned_rate_limits": { 00:19:37.225 "rw_ios_per_sec": 0, 00:19:37.225 "rw_mbytes_per_sec": 0, 00:19:37.225 "r_mbytes_per_sec": 0, 00:19:37.225 "w_mbytes_per_sec": 0 00:19:37.225 }, 00:19:37.225 "claimed": false, 00:19:37.225 "zoned": false, 00:19:37.225 "supported_io_types": { 00:19:37.225 "read": true, 00:19:37.225 "write": true, 00:19:37.225 "unmap": true, 00:19:37.225 "flush": false, 00:19:37.225 "reset": true, 00:19:37.225 "nvme_admin": false, 00:19:37.225 "nvme_io": false, 00:19:37.225 "nvme_io_md": false, 00:19:37.225 "write_zeroes": true, 00:19:37.225 "zcopy": false, 00:19:37.225 "get_zone_info": false, 00:19:37.225 "zone_management": false, 00:19:37.225 "zone_append": false, 00:19:37.225 "compare": false, 00:19:37.225 "compare_and_write": false, 00:19:37.225 "abort": false, 00:19:37.225 "seek_hole": true, 00:19:37.225 "seek_data": true, 00:19:37.225 "copy": false, 00:19:37.225 "nvme_iov_md": false 00:19:37.225 }, 00:19:37.225 "driver_specific": { 00:19:37.225 "lvol": { 00:19:37.225 "lvol_store_uuid": "52894748-ca45-4566-9b93-c5351797e5ba", 00:19:37.225 "base_bdev": "nvme0n1", 00:19:37.225 "thin_provision": true, 00:19:37.225 "num_allocated_clusters": 0, 00:19:37.225 "snapshot": false, 00:19:37.225 "clone": false, 00:19:37.225 "esnap_clone": false 00:19:37.225 } 00:19:37.225 } 00:19:37.225 } 00:19:37.225 ]' 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:37.225 12:45:36 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:37.484 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2a12b6e4-0ea7-4901-a86e-114180f19855 00:19:37.745 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:37.745 { 00:19:37.745 "name": "2a12b6e4-0ea7-4901-a86e-114180f19855", 00:19:37.745 "aliases": [ 00:19:37.745 "lvs/nvme0n1p0" 00:19:37.745 ], 00:19:37.745 "product_name": "Logical Volume", 00:19:37.745 "block_size": 4096, 00:19:37.745 "num_blocks": 26476544, 00:19:37.745 "uuid": "2a12b6e4-0ea7-4901-a86e-114180f19855", 00:19:37.745 "assigned_rate_limits": { 00:19:37.745 "rw_ios_per_sec": 0, 00:19:37.745 "rw_mbytes_per_sec": 0, 00:19:37.745 "r_mbytes_per_sec": 0, 00:19:37.745 "w_mbytes_per_sec": 0 00:19:37.745 }, 00:19:37.745 "claimed": false, 00:19:37.745 "zoned": false, 00:19:37.745 "supported_io_types": { 00:19:37.745 "read": true, 00:19:37.745 "write": true, 00:19:37.745 "unmap": true, 00:19:37.745 "flush": false, 00:19:37.745 "reset": true, 00:19:37.745 "nvme_admin": false, 00:19:37.745 "nvme_io": false, 00:19:37.745 "nvme_io_md": false, 00:19:37.745 "write_zeroes": true, 00:19:37.745 "zcopy": false, 00:19:37.745 "get_zone_info": false, 00:19:37.745 "zone_management": false, 00:19:37.745 "zone_append": false, 00:19:37.745 "compare": false, 00:19:37.745 "compare_and_write": false, 00:19:37.745 "abort": false, 00:19:37.745 "seek_hole": true, 00:19:37.745 "seek_data": true, 00:19:37.745 "copy": false, 00:19:37.745 "nvme_iov_md": false 00:19:37.745 }, 00:19:37.745 "driver_specific": { 00:19:37.746 "lvol": { 00:19:37.746 "lvol_store_uuid": "52894748-ca45-4566-9b93-c5351797e5ba", 00:19:37.746 "base_bdev": "nvme0n1", 00:19:37.746 "thin_provision": true, 00:19:37.746 "num_allocated_clusters": 0, 00:19:37.746 "snapshot": false, 00:19:37.746 "clone": false, 00:19:37.746 "esnap_clone": false 00:19:37.746 } 00:19:37.746 } 00:19:37.746 } 00:19:37.746 ]' 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:37.746 12:45:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2a12b6e4-0ea7-4901-a86e-114180f19855 -c nvc0n1p0 --l2p_dram_limit 20 00:19:38.008 [2024-12-14 12:45:37.482385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.482426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:38.008 [2024-12-14 12:45:37.482438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:38.008 [2024-12-14 12:45:37.482446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.482487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.482496] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.008 [2024-12-14 12:45:37.482502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:38.008 [2024-12-14 12:45:37.482510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.482523] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:38.008 [2024-12-14 12:45:37.483114] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:38.008 [2024-12-14 12:45:37.483127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.483134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.008 [2024-12-14 12:45:37.483141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:19:38.008 [2024-12-14 12:45:37.483147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.483168] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 82e4c9f3-791c-44b6-923b-c9973ce7507d 00:19:38.008 [2024-12-14 12:45:37.484125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.484145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:38.008 [2024-12-14 12:45:37.484156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:38.008 [2024-12-14 12:45:37.484162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.488794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.488821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.008 [2024-12-14 12:45:37.488830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:19:38.008 [2024-12-14 12:45:37.488838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.488901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.488909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.008 [2024-12-14 12:45:37.488919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:38.008 [2024-12-14 12:45:37.488924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.488953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.488960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:38.008 [2024-12-14 12:45:37.488967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.008 [2024-12-14 12:45:37.488973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.488990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:38.008 [2024-12-14 12:45:37.491823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.491932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.008 [2024-12-14 12:45:37.491944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.841 ms 00:19:38.008 [2024-12-14 12:45:37.491953] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.008 [2024-12-14 12:45:37.491980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.008 [2024-12-14 12:45:37.491987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:38.009 [2024-12-14 12:45:37.491994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:38.009 [2024-12-14 12:45:37.492001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.009 [2024-12-14 12:45:37.492017] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:38.009 [2024-12-14 12:45:37.492137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:38.009 [2024-12-14 12:45:37.492147] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:38.009 [2024-12-14 12:45:37.492157] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:38.009 [2024-12-14 12:45:37.492164] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492174] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492180] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:38.009 [2024-12-14 12:45:37.492187] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:38.009 [2024-12-14 12:45:37.492193] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:38.009 [2024-12-14 12:45:37.492199] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:38.009 [2024-12-14 12:45:37.492206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.009 [2024-12-14 12:45:37.492213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:38.009 [2024-12-14 12:45:37.492218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:19:38.009 [2024-12-14 12:45:37.492225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.009 [2024-12-14 12:45:37.492288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.009 [2024-12-14 12:45:37.492296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:38.009 [2024-12-14 12:45:37.492302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:38.009 [2024-12-14 12:45:37.492310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.009 [2024-12-14 12:45:37.492377] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:38.009 [2024-12-14 12:45:37.492387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:38.009 [2024-12-14 12:45:37.492393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:38.009 [2024-12-14 12:45:37.492412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:38.009 
[2024-12-14 12:45:37.492423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:38.009 [2024-12-14 12:45:37.492428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.009 [2024-12-14 12:45:37.492441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:38.009 [2024-12-14 12:45:37.492452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:38.009 [2024-12-14 12:45:37.492458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:38.009 [2024-12-14 12:45:37.492465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:38.009 [2024-12-14 12:45:37.492469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:38.009 [2024-12-14 12:45:37.492479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:38.009 [2024-12-14 12:45:37.492491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:38.009 [2024-12-14 12:45:37.492507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:38.009 [2024-12-14 12:45:37.492525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:38.009 [2024-12-14 12:45:37.492540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:38.009 [2024-12-14 12:45:37.492558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:38.009 [2024-12-14 12:45:37.492574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.009 [2024-12-14 12:45:37.492587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:38.009 [2024-12-14 12:45:37.492592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:38.009 [2024-12-14 12:45:37.492597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:38.009 [2024-12-14 12:45:37.492603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:38.009 [2024-12-14 12:45:37.492608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:38.009 [2024-12-14 12:45:37.492615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:38.009 [2024-12-14 12:45:37.492626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:38.009 [2024-12-14 12:45:37.492630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:38.009 [2024-12-14 12:45:37.492642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:38.009 [2024-12-14 12:45:37.492648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:38.009 [2024-12-14 12:45:37.492663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:38.009 [2024-12-14 12:45:37.492668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:38.009 [2024-12-14 12:45:37.492675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:38.009 [2024-12-14 12:45:37.492680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:38.009 [2024-12-14 12:45:37.492686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:38.009 [2024-12-14 12:45:37.492691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:38.009 [2024-12-14 12:45:37.492699] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:38.009 [2024-12-14 12:45:37.492706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:38.009 [2024-12-14 12:45:37.492719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:38.009 [2024-12-14 12:45:37.492725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:38.009 [2024-12-14 12:45:37.492731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:38.009 [2024-12-14 12:45:37.492738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:38.009 [2024-12-14 12:45:37.492743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:38.009 [2024-12-14 12:45:37.492751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:38.009 [2024-12-14 12:45:37.492756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:38.009 [2024-12-14 12:45:37.492764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:38.009 [2024-12-14 12:45:37.492769] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:38.009 [2024-12-14 12:45:37.492800] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:38.009 [2024-12-14 12:45:37.492806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:38.009 [2024-12-14 12:45:37.492820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:38.009 [2024-12-14 12:45:37.492826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:38.009 [2024-12-14 12:45:37.492832] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:38.009 [2024-12-14 12:45:37.492839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.009 [2024-12-14 12:45:37.492845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:38.010 [2024-12-14 12:45:37.492851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:19:38.010 [2024-12-14 12:45:37.492857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.010 [2024-12-14 12:45:37.492895] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
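A quick cross-check of the layout dump above: 20971520 L2P entries at an L2P address size of 4 bytes means a fully resident mapping table needs 20971520 × 4 B = 80 MiB, exactly the 80.00 MiB l2p region listed. The device was created with --l2p_dram_limit 20, so at most about 20 MiB of that table may be cached in DRAM at once; the "l2p maximum resident size is: 19 (of 20) MiB" notice printed during L2P initialization further down reflects that cap (the remaining ~1 MiB presumably goes to cache bookkeeping). The arithmetic in shell:

  echo $((20971520 * 4 / 1048576))   # 80 MiB for the full L2P table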
00:19:38.010 [2024-12-14 12:45:37.492903] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:41.312 [2024-12-14 12:45:40.580199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.580265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:41.312 [2024-12-14 12:45:40.580284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3087.288 ms 00:19:41.312 [2024-12-14 12:45:40.580293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.608940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.608996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:41.312 [2024-12-14 12:45:40.609012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.418 ms 00:19:41.312 [2024-12-14 12:45:40.609021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.609182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.609194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:41.312 [2024-12-14 12:45:40.609207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:41.312 [2024-12-14 12:45:40.609216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.653313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.653385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:41.312 [2024-12-14 12:45:40.653401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.060 ms 00:19:41.312 [2024-12-14 12:45:40.653409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.653457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.653467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:41.312 [2024-12-14 12:45:40.653477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:41.312 [2024-12-14 12:45:40.653487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.654097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.654126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:41.312 [2024-12-14 12:45:40.654139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:41.312 [2024-12-14 12:45:40.654147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.654270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.654279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:41.312 [2024-12-14 12:45:40.654292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:41.312 [2024-12-14 12:45:40.654299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.669517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.669565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:41.312 [2024-12-14 
12:45:40.669578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.194 ms 00:19:41.312 [2024-12-14 12:45:40.669595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.682902] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:41.312 [2024-12-14 12:45:40.690587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.690645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:41.312 [2024-12-14 12:45:40.690658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.896 ms 00:19:41.312 [2024-12-14 12:45:40.690669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.787406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.787484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:41.312 [2024-12-14 12:45:40.787499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.705 ms 00:19:41.312 [2024-12-14 12:45:40.787512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.787724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.787741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:41.312 [2024-12-14 12:45:40.787751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:19:41.312 [2024-12-14 12:45:40.787765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.814281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.814344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:41.312 [2024-12-14 12:45:40.814358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.459 ms 00:19:41.312 [2024-12-14 12:45:40.814368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.839942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.840002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:41.312 [2024-12-14 12:45:40.840015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.522 ms 00:19:41.312 [2024-12-14 12:45:40.840025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.840696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.840724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:41.312 [2024-12-14 12:45:40.840737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:19:41.312 [2024-12-14 12:45:40.840749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.312 [2024-12-14 12:45:40.914647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.312 [2024-12-14 12:45:40.914690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:41.312 [2024-12-14 12:45:40.914701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.854 ms 00:19:41.313 [2024-12-14 12:45:40.914710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.313 [2024-12-14 
12:45:40.939557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.313 [2024-12-14 12:45:40.939598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:41.313 [2024-12-14 12:45:40.939611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.781 ms 00:19:41.313 [2024-12-14 12:45:40.939621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.313 [2024-12-14 12:45:40.963863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.313 [2024-12-14 12:45:40.963904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:41.313 [2024-12-14 12:45:40.963915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.207 ms 00:19:41.313 [2024-12-14 12:45:40.963924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.313 [2024-12-14 12:45:40.987834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.313 [2024-12-14 12:45:40.987872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:41.313 [2024-12-14 12:45:40.987882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.875 ms 00:19:41.313 [2024-12-14 12:45:40.987891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.313 [2024-12-14 12:45:40.987926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.313 [2024-12-14 12:45:40.987939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:41.313 [2024-12-14 12:45:40.987947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:41.313 [2024-12-14 12:45:40.987956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.313 [2024-12-14 12:45:40.988028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:41.313 [2024-12-14 12:45:40.988040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:41.313 [2024-12-14 12:45:40.988047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:41.313 [2024-12-14 12:45:40.988072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:41.313 [2024-12-14 12:45:40.988888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3506.101 ms, result 0 00:19:41.313 { 00:19:41.313 "name": "ftl0", 00:19:41.313 "uuid": "82e4c9f3-791c-44b6-923b-c9973ce7507d" 00:19:41.313 } 00:19:41.313 12:45:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:41.313 12:45:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:41.313 12:45:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:41.574 12:45:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:41.835 [2024-12-14 12:45:41.317688] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:41.835 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:41.835 Zero copy mechanism will not be used. 00:19:41.835 Running I/O for 4 seconds... 
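Two details of this first pass are worth noting. The I/O size of 69632 bytes is 68 KiB, i.e. 17 logical blocks of 4096 B; because it exceeds bdevperf's 65536-byte zero-copy threshold, the notice above is expected and the run proceeds through copied buffers. And with -q 1 there is never more than one command in flight, so the average latency reported below is a direct per-command service time rather than a queueing effect.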
00:19:43.720 751.00 IOPS, 49.87 MiB/s [2024-12-14T12:45:44.447Z] 803.50 IOPS, 53.36 MiB/s [2024-12-14T12:45:45.412Z] 824.00 IOPS, 54.72 MiB/s [2024-12-14T12:45:45.412Z] 834.50 IOPS, 55.42 MiB/s
00:19:45.675 Latency(us)
00:19:45.675 [2024-12-14T12:45:45.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:45.675 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:45.675 ftl0 : 4.00 834.28 55.40 0.00 0.00 1262.80 230.01 4511.90
00:19:45.675 [2024-12-14T12:45:45.412Z] ===================================================================================================================
00:19:45.675 [2024-12-14T12:45:45.412Z] Total : 834.28 55.40 0.00 0.00 1262.80 230.01 4511.90
00:19:45.675 [2024-12-14 12:45:45.329356] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:19:45.675 "results": [
00:19:45.675 {
00:19:45.675 "job": "ftl0",
00:19:45.675 "core_mask": "0x1",
00:19:45.675 "workload": "randwrite",
00:19:45.675 "status": "finished",
00:19:45.675 "queue_depth": 1,
00:19:45.675 "io_size": 69632,
00:19:45.675 "runtime": 4.002275,
00:19:45.675 "iops": 834.2755058060728,
00:19:45.675 "mibps": 55.40110780743452,
00:19:45.675 "io_failed": 0,
00:19:45.675 "io_timeout": 0,
00:19:45.675 "avg_latency_us": 1262.8047642085378,
00:19:45.675 "min_latency_us": 230.00615384615384,
00:19:45.675 "max_latency_us": 4511.901538461539
00:19:45.675 }
00:19:45.675 ],
00:19:45.675 "core_count": 1
00:19:45.675 }
00:19:45.675 12:45:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:19:45.936 [2024-12-14 12:45:45.458221] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:45.936 Running I/O for 4 seconds...
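The depth-1 summary above is internally consistent: 834.28 IOPS × 69632 B ≈ 55.40 MiB/s, matching the MiB/s column, and by Little's law the average concurrency is 834.28 IOPS × 1262.80 µs ≈ 1.05, confirming that queue depth 1 was the limiter. The JSON block repeats the same results in machine-readable form, which is what scripted consumers would parse instead of the table.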
00:19:47.821 5705.00 IOPS, 22.29 MiB/s [2024-12-14T12:45:48.501Z] 5074.50 IOPS, 19.82 MiB/s [2024-12-14T12:45:49.887Z] 4836.67 IOPS, 18.89 MiB/s [2024-12-14T12:45:49.887Z] 4762.50 IOPS, 18.60 MiB/s
00:19:50.150 Latency(us)
00:19:50.150 [2024-12-14T12:45:49.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.150 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:50.150 ftl0 : 4.04 4750.11 18.56 0.00 0.00 26820.73 370.22 50009.01
00:19:50.150 [2024-12-14T12:45:49.887Z] ===================================================================================================================
00:19:50.150 [2024-12-14T12:45:49.887Z] Total : 4750.11 18.56 0.00 0.00 26820.73 0.00 50009.01
00:19:50.150 [2024-12-14 12:45:49.505452] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:19:50.150 "results": [
00:19:50.150 {
00:19:50.150 "job": "ftl0",
00:19:50.150 "core_mask": "0x1",
00:19:50.150 "workload": "randwrite",
00:19:50.150 "status": "finished",
00:19:50.150 "queue_depth": 128,
00:19:50.150 "io_size": 4096,
00:19:50.150 "runtime": 4.036751,
00:19:50.150 "iops": 4750.107202549773,
00:19:50.150 "mibps": 18.555106259960052,
00:19:50.150 "io_failed": 0,
00:19:50.150 "io_timeout": 0,
00:19:50.150 "avg_latency_us": 26820.734646956174,
00:19:50.150 "min_latency_us": 370.2153846153846,
00:19:50.150 "max_latency_us": 50009.00923076923
00:19:50.150 }
00:19:50.150 ],
00:19:50.150 "core_count": 1
00:19:50.150 }
00:19:50.150 12:45:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 [2024-12-14 12:45:49.626861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:19:50.150 Running I/O for 4 seconds...
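The depth-128 randwrite summary also checks out: 4750.11 IOPS × 4096 B ≈ 18.56 MiB/s, and Little's law gives 4750.11 IOPS × 26820.73 µs ≈ 127 commands in flight on average, essentially the configured queue depth of 128 — this run is throughput-bound and the ~27 ms average latency is mostly queueing. The verify pass launched above writes and then reads back the bdev's full LBA range (the verify_range below covers 20971520 blocks of 4096 B, i.e. the whole 80 GiB device), checking data integrity rather than chasing throughput.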
00:19:52.039 4294.00 IOPS, 16.77 MiB/s [2024-12-14T12:45:52.719Z] 4257.00 IOPS, 16.63 MiB/s [2024-12-14T12:45:53.663Z] 4240.67 IOPS, 16.57 MiB/s [2024-12-14T12:45:53.663Z] 4266.50 IOPS, 16.67 MiB/s
00:19:53.926 Latency(us)
00:19:53.926 [2024-12-14T12:45:53.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:53.926 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:53.926 Verification LBA range: start 0x0 length 0x1400000
00:19:53.926 ftl0 : 4.02 4279.18 16.72 0.00 0.00 29817.78 412.75 79449.80
00:19:53.926 [2024-12-14T12:45:53.663Z] ===================================================================================================================
00:19:53.926 [2024-12-14T12:45:53.663Z] Total : 4279.18 16.72 0.00 0.00 29817.78 0.00 79449.80
00:19:53.926 [2024-12-14 12:45:53.660617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:54.187 {
00:19:54.187 "results": [
00:19:54.187 {
00:19:54.187 "job": "ftl0",
00:19:54.187 "core_mask": "0x1",
00:19:54.187 "workload": "verify",
00:19:54.187 "status": "finished",
00:19:54.187 "verify_range": {
00:19:54.187 "start": 0,
00:19:54.187 "length": 20971520
00:19:54.187 },
00:19:54.187 "queue_depth": 128,
00:19:54.187 "io_size": 4096,
00:19:54.187 "runtime": 4.016422,
00:19:54.187 "iops": 4279.1818190419235,
00:19:54.187 "mibps": 16.715553980632514,
00:19:54.187 "io_failed": 0,
00:19:54.188 "io_timeout": 0,
00:19:54.188 "avg_latency_us": 29817.776121666197,
00:19:54.188 "min_latency_us": 412.7507692307692,
00:19:54.188 "max_latency_us": 79449.79692307692
00:19:54.188 }
00:19:54.188 ],
00:19:54.188 "core_count": 1
00:19:54.188 }
00:19:54.188 12:45:53 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-14 12:45:53.879969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-14 12:45:53.880038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-12-14 12:45:53.880053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms [2024-12-14 12:45:53.880085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-14 12:45:53.880110] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-12-14 12:45:53.883137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-14 12:45:53.883178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-12-14 12:45:53.883192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.005 ms [2024-12-14 12:45:53.883200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-14 12:45:53.886245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-14 12:45:53.886420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-12-14 12:45:53.886445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms [2024-12-14 12:45:53.886461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-12-14 12:45:54.100904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-12-14 12:45:54.100961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:19:54.449 [2024-12-14 12:45:54.100982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.412 ms 00:19:54.449 [2024-12-14 12:45:54.100990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.449 [2024-12-14 12:45:54.107182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.449 [2024-12-14 12:45:54.107227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:54.449 [2024-12-14 12:45:54.107242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.145 ms 00:19:54.449 [2024-12-14 12:45:54.107253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.449 [2024-12-14 12:45:54.133815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.449 [2024-12-14 12:45:54.133864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:54.449 [2024-12-14 12:45:54.133880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.462 ms 00:19:54.449 [2024-12-14 12:45:54.133888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.449 [2024-12-14 12:45:54.151305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.449 [2024-12-14 12:45:54.151359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:54.449 [2024-12-14 12:45:54.151374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.364 ms 00:19:54.449 [2024-12-14 12:45:54.151383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.449 [2024-12-14 12:45:54.151548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.449 [2024-12-14 12:45:54.151560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:54.449 [2024-12-14 12:45:54.151575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:54.449 [2024-12-14 12:45:54.151584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.449 [2024-12-14 12:45:54.177732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.449 [2024-12-14 12:45:54.177781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:54.449 [2024-12-14 12:45:54.177796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.126 ms 00:19:54.449 [2024-12-14 12:45:54.177803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.711 [2024-12-14 12:45:54.203295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.711 [2024-12-14 12:45:54.203486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:54.711 [2024-12-14 12:45:54.203513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.440 ms 00:19:54.711 [2024-12-14 12:45:54.203521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.711 [2024-12-14 12:45:54.228644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.711 [2024-12-14 12:45:54.228692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.711 [2024-12-14 12:45:54.228707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.007 ms 00:19:54.711 [2024-12-14 12:45:54.228714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.711 [2024-12-14 12:45:54.253981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.711 [2024-12-14 
12:45:54.254026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.711 [2024-12-14 12:45:54.254043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.173 ms 00:19:54.711 [2024-12-14 12:45:54.254050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.711 [2024-12-14 12:45:54.254115] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.711
[2024-12-14 12:45:54.254132 .. 12:45:54.255076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (100 identical per-band entries collapsed)
00:19:54.713 [2024-12-14 12:45:54.255092] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.713 [2024-12-14 12:45:54.255102] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 82e4c9f3-791c-44b6-923b-c9973ce7507d 00:19:54.713 [2024-12-14 12:45:54.255112] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.713 [2024-12-14 12:45:54.255122] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:54.713 [2024-12-14 12:45:54.255130] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.713 [2024-12-14 12:45:54.255140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.713 [2024-12-14 12:45:54.255148] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.713 [2024-12-14 12:45:54.255158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.713 [2024-12-14 12:45:54.255166] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.713 [2024-12-14 12:45:54.255177] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.713 [2024-12-14 12:45:54.255184] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.713 [2024-12-14 12:45:54.255194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.713 [2024-12-14 12:45:54.255202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.713 [2024-12-14 12:45:54.255214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.080 ms 00:19:54.713 [2024-12-14 12:45:54.255222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.269139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.713 [2024-12-14 12:45:54.269308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.713 [2024-12-14 12:45:54.269333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.871 ms 00:19:54.713 [2024-12-14 12:45:54.269341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.269761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.713 [2024-12-14 12:45:54.269779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.713 [2024-12-14 12:45:54.269791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:19:54.713 [2024-12-14 12:45:54.269798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.308949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.713 [2024-12-14 12:45:54.309147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.713 [2024-12-14 12:45:54.309174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.713 [2024-12-14 12:45:54.309185] mngt/ftl_mngt.c: 431:trace_step:
*NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.309264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.713 [2024-12-14 12:45:54.309274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.713 [2024-12-14 12:45:54.309284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.713 [2024-12-14 12:45:54.309293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.309398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.713 [2024-12-14 12:45:54.309409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.713 [2024-12-14 12:45:54.309421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.713 [2024-12-14 12:45:54.309429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.309447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.713 [2024-12-14 12:45:54.309456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.713 [2024-12-14 12:45:54.309466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.713 [2024-12-14 12:45:54.309474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.713 [2024-12-14 12:45:54.393714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.713 [2024-12-14 12:45:54.393769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.713 [2024-12-14 12:45:54.393787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.713 [2024-12-14 12:45:54.393796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.462902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.462962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.973 [2024-12-14 12:45:54.462977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.973 [2024-12-14 12:45:54.462985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.463126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.973 [2024-12-14 12:45:54.463138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.973 [2024-12-14 12:45:54.463146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.463229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.973 [2024-12-14 12:45:54.463241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.973 [2024-12-14 12:45:54.463251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.463370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.973 [2024-12-14 12:45:54.463385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:54.973 [2024-12-14 12:45:54.463393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.463438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.973 [2024-12-14 12:45:54.463449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.973 [2024-12-14 12:45:54.463456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.463512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.973 [2024-12-14 12:45:54.463524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.973 [2024-12-14 12:45:54.463540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.973 [2024-12-14 12:45:54.463599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.973 [2024-12-14 12:45:54.463609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.973 [2024-12-14 12:45:54.463618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.973 [2024-12-14 12:45:54.463765] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 583.745 ms, result 0 00:19:54.973 true 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77728 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77728 ']' 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77728 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77728 00:19:54.973 killing process with pid 77728 00:19:54.973 Received shutdown signal, test time was about 4.000000 seconds 00:19:54.973 00:19:54.973 Latency(us) 00:19:54.973 [2024-12-14T12:45:54.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:54.973 [2024-12-14T12:45:54.710Z] =================================================================================================================== 00:19:54.973 [2024-12-14T12:45:54.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77728' 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77728 00:19:54.973 12:45:54 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77728 00:19:55.919 Remove shared memory files 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:55.919 12:45:55 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:55.919 ************************************ 00:19:55.919 END TEST ftl_bdevperf 00:19:55.919 ************************************ 00:19:55.919 00:19:55.919 real 0m21.900s 00:19:55.919 user 0m24.471s 00:19:55.919 sys 0m1.005s 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:55.919 12:45:55 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:55.919 12:45:55 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:55.919 12:45:55 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:55.919 12:45:55 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:55.919 12:45:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:55.919 ************************************ 00:19:55.919 START TEST ftl_trim 00:19:55.919 ************************************ 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:55.919 * Looking for test storage... 00:19:55.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:55.919 12:45:55 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:55.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.919 --rc genhtml_branch_coverage=1 00:19:55.919 --rc genhtml_function_coverage=1 00:19:55.919 --rc genhtml_legend=1 00:19:55.919 --rc geninfo_all_blocks=1 00:19:55.919 --rc geninfo_unexecuted_blocks=1 00:19:55.919 00:19:55.919 ' 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:55.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.919 --rc genhtml_branch_coverage=1 00:19:55.919 --rc genhtml_function_coverage=1 00:19:55.919 --rc genhtml_legend=1 00:19:55.919 --rc geninfo_all_blocks=1 00:19:55.919 --rc geninfo_unexecuted_blocks=1 00:19:55.919 00:19:55.919 ' 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:55.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.919 --rc genhtml_branch_coverage=1 00:19:55.919 --rc genhtml_function_coverage=1 00:19:55.919 --rc genhtml_legend=1 00:19:55.919 --rc geninfo_all_blocks=1 00:19:55.919 --rc geninfo_unexecuted_blocks=1 00:19:55.919 00:19:55.919 ' 00:19:55.919 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:55.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:55.920 --rc genhtml_branch_coverage=1 00:19:55.920 --rc genhtml_function_coverage=1 00:19:55.920 --rc genhtml_legend=1 00:19:55.920 --rc geninfo_all_blocks=1 00:19:55.920 --rc geninfo_unexecuted_blocks=1 00:19:55.920 00:19:55.920 ' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
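The xtrace above shows scripts/common.sh's lt/cmp_versions helpers deciding whether the installed lcov predates version 2 by splitting each version string on IFS=.-: and comparing it component-wise. A self-contained bash sketch of the same idea (illustrative only; version_lt is a hypothetical name, not the repo's implementation, and it splits on dots alone):

    # return 0 (true) when dotted version $1 sorts strictly before $2
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0
            ((x > y)) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2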
00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:55.920 12:45:55 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78080 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:55.920 12:45:55 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78080 00:19:55.920 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78080 ']' 00:19:55.920 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.920 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.920 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.920 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.920 12:45:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:56.181 [2024-12-14 12:45:55.670613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:56.181 [2024-12-14 12:45:55.670964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78080 ] 00:19:56.181 [2024-12-14 12:45:55.838768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:56.441 [2024-12-14 12:45:55.965596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.441 [2024-12-14 12:45:55.966020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.441 [2024-12-14 12:45:55.966043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.014 12:45:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:57.014 12:45:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:57.014 12:45:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:57.014 12:45:56 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:57.014 12:45:56 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:57.014 12:45:56 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:57.014 12:45:56 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:57.014 12:45:56 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:57.275 12:45:57 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:57.275 12:45:57 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:57.275 12:45:57 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:57.275 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:57.537 { 00:19:57.537 "name": "nvme0n1", 00:19:57.537 "aliases": [ 
00:19:57.537 "1c9a8416-8d63-441b-b104-a56954034cf6" 00:19:57.537 ], 00:19:57.537 "product_name": "NVMe disk", 00:19:57.537 "block_size": 4096, 00:19:57.537 "num_blocks": 1310720, 00:19:57.537 "uuid": "1c9a8416-8d63-441b-b104-a56954034cf6", 00:19:57.537 "numa_id": -1, 00:19:57.537 "assigned_rate_limits": { 00:19:57.537 "rw_ios_per_sec": 0, 00:19:57.537 "rw_mbytes_per_sec": 0, 00:19:57.537 "r_mbytes_per_sec": 0, 00:19:57.537 "w_mbytes_per_sec": 0 00:19:57.537 }, 00:19:57.537 "claimed": true, 00:19:57.537 "claim_type": "read_many_write_one", 00:19:57.537 "zoned": false, 00:19:57.537 "supported_io_types": { 00:19:57.537 "read": true, 00:19:57.537 "write": true, 00:19:57.537 "unmap": true, 00:19:57.537 "flush": true, 00:19:57.537 "reset": true, 00:19:57.537 "nvme_admin": true, 00:19:57.537 "nvme_io": true, 00:19:57.537 "nvme_io_md": false, 00:19:57.537 "write_zeroes": true, 00:19:57.537 "zcopy": false, 00:19:57.537 "get_zone_info": false, 00:19:57.537 "zone_management": false, 00:19:57.537 "zone_append": false, 00:19:57.537 "compare": true, 00:19:57.537 "compare_and_write": false, 00:19:57.537 "abort": true, 00:19:57.537 "seek_hole": false, 00:19:57.537 "seek_data": false, 00:19:57.537 "copy": true, 00:19:57.537 "nvme_iov_md": false 00:19:57.537 }, 00:19:57.537 "driver_specific": { 00:19:57.537 "nvme": [ 00:19:57.537 { 00:19:57.537 "pci_address": "0000:00:11.0", 00:19:57.537 "trid": { 00:19:57.537 "trtype": "PCIe", 00:19:57.537 "traddr": "0000:00:11.0" 00:19:57.537 }, 00:19:57.537 "ctrlr_data": { 00:19:57.537 "cntlid": 0, 00:19:57.537 "vendor_id": "0x1b36", 00:19:57.537 "model_number": "QEMU NVMe Ctrl", 00:19:57.537 "serial_number": "12341", 00:19:57.537 "firmware_revision": "8.0.0", 00:19:57.537 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:57.537 "oacs": { 00:19:57.537 "security": 0, 00:19:57.537 "format": 1, 00:19:57.537 "firmware": 0, 00:19:57.537 "ns_manage": 1 00:19:57.537 }, 00:19:57.537 "multi_ctrlr": false, 00:19:57.537 "ana_reporting": false 00:19:57.537 }, 00:19:57.537 "vs": { 00:19:57.537 "nvme_version": "1.4" 00:19:57.537 }, 00:19:57.537 "ns_data": { 00:19:57.537 "id": 1, 00:19:57.537 "can_share": false 00:19:57.537 } 00:19:57.537 } 00:19:57.537 ], 00:19:57.537 "mp_policy": "active_passive" 00:19:57.537 } 00:19:57.537 } 00:19:57.537 ]' 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:57.537 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.799 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:57.799 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:57.799 12:45:57 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=52894748-ca45-4566-9b93-c5351797e5ba 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:57.799 12:45:57 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 52894748-ca45-4566-9b93-c5351797e5ba 00:19:58.058 12:45:57 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:58.316 12:45:57 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=bc57d621-984f-450b-b9e7-31f8e6c41ba0 00:19:58.316 12:45:57 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u bc57d621-984f-450b-b9e7-31f8e6c41ba0 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:58.574 12:45:58 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:58.574 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:58.574 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:58.574 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:58.574 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:58.574 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:58.832 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:58.832 { 00:19:58.832 "name": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:19:58.832 "aliases": [ 00:19:58.832 "lvs/nvme0n1p0" 00:19:58.832 ], 00:19:58.832 "product_name": "Logical Volume", 00:19:58.832 "block_size": 4096, 00:19:58.832 "num_blocks": 26476544, 00:19:58.832 "uuid": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:19:58.832 "assigned_rate_limits": { 00:19:58.832 "rw_ios_per_sec": 0, 00:19:58.832 "rw_mbytes_per_sec": 0, 00:19:58.832 "r_mbytes_per_sec": 0, 00:19:58.832 "w_mbytes_per_sec": 0 00:19:58.832 }, 00:19:58.832 "claimed": false, 00:19:58.832 "zoned": false, 00:19:58.832 "supported_io_types": { 00:19:58.832 "read": true, 00:19:58.832 "write": true, 00:19:58.832 "unmap": true, 00:19:58.832 "flush": false, 00:19:58.832 "reset": true, 00:19:58.832 "nvme_admin": false, 00:19:58.832 "nvme_io": false, 00:19:58.832 "nvme_io_md": false, 00:19:58.832 "write_zeroes": true, 00:19:58.832 "zcopy": false, 00:19:58.832 "get_zone_info": false, 00:19:58.832 "zone_management": false, 00:19:58.832 "zone_append": false, 00:19:58.832 "compare": false, 00:19:58.832 "compare_and_write": false, 00:19:58.832 "abort": false, 00:19:58.832 "seek_hole": true, 00:19:58.833 "seek_data": true, 00:19:58.833 "copy": false, 00:19:58.833 "nvme_iov_md": false 00:19:58.833 }, 00:19:58.833 "driver_specific": { 00:19:58.833 "lvol": { 00:19:58.833 "lvol_store_uuid": "bc57d621-984f-450b-b9e7-31f8e6c41ba0", 00:19:58.833 "base_bdev": "nvme0n1", 00:19:58.833 "thin_provision": true, 00:19:58.833 "num_allocated_clusters": 0, 00:19:58.833 "snapshot": false, 00:19:58.833 "clone": false, 00:19:58.833 "esnap_clone": false 00:19:58.833 } 00:19:58.833 } 00:19:58.833 } 00:19:58.833 ]' 00:19:58.833 12:45:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:58.833 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:58.833 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:58.833 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:58.833 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:58.833 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:58.833 12:45:58 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:58.833 12:45:58 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:58.833 12:45:58 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:59.091 12:45:58 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:59.091 12:45:58 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:59.091 12:45:58 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:59.091 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:59.091 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:59.091 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:59.091 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:59.091 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:59.349 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:59.349 { 00:19:59.349 "name": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:19:59.349 "aliases": [ 00:19:59.349 "lvs/nvme0n1p0" 00:19:59.349 ], 00:19:59.349 "product_name": "Logical Volume", 00:19:59.349 "block_size": 4096, 00:19:59.349 "num_blocks": 26476544, 00:19:59.349 "uuid": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:19:59.349 "assigned_rate_limits": { 00:19:59.349 "rw_ios_per_sec": 0, 00:19:59.349 "rw_mbytes_per_sec": 0, 00:19:59.349 "r_mbytes_per_sec": 0, 00:19:59.349 "w_mbytes_per_sec": 0 00:19:59.349 }, 00:19:59.349 "claimed": false, 00:19:59.349 "zoned": false, 00:19:59.349 "supported_io_types": { 00:19:59.349 "read": true, 00:19:59.349 "write": true, 00:19:59.349 "unmap": true, 00:19:59.349 "flush": false, 00:19:59.349 "reset": true, 00:19:59.349 "nvme_admin": false, 00:19:59.349 "nvme_io": false, 00:19:59.349 "nvme_io_md": false, 00:19:59.349 "write_zeroes": true, 00:19:59.349 "zcopy": false, 00:19:59.349 "get_zone_info": false, 00:19:59.349 "zone_management": false, 00:19:59.349 "zone_append": false, 00:19:59.349 "compare": false, 00:19:59.349 "compare_and_write": false, 00:19:59.349 "abort": false, 00:19:59.349 "seek_hole": true, 00:19:59.349 "seek_data": true, 00:19:59.349 "copy": false, 00:19:59.349 "nvme_iov_md": false 00:19:59.349 }, 00:19:59.349 "driver_specific": { 00:19:59.349 "lvol": { 00:19:59.349 "lvol_store_uuid": "bc57d621-984f-450b-b9e7-31f8e6c41ba0", 00:19:59.349 "base_bdev": "nvme0n1", 00:19:59.349 "thin_provision": true, 00:19:59.349 "num_allocated_clusters": 0, 00:19:59.349 "snapshot": false, 00:19:59.349 "clone": false, 00:19:59.349 "esnap_clone": false 00:19:59.349 } 00:19:59.349 } 00:19:59.349 } 00:19:59.349 ]' 00:19:59.349 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:59.349 12:45:58 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:59.349 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:59.349 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:59.349 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:59.349 12:45:58 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:59.349 12:45:58 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:59.349 12:45:58 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:59.608 12:45:59 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:59.608 12:45:59 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:59.608 12:45:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 07747dce-33e0-413e-bd91-4464c36c0a4b 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:59.608 { 00:19:59.608 "name": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:19:59.608 "aliases": [ 00:19:59.608 "lvs/nvme0n1p0" 00:19:59.608 ], 00:19:59.608 "product_name": "Logical Volume", 00:19:59.608 "block_size": 4096, 00:19:59.608 "num_blocks": 26476544, 00:19:59.608 "uuid": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:19:59.608 "assigned_rate_limits": { 00:19:59.608 "rw_ios_per_sec": 0, 00:19:59.608 "rw_mbytes_per_sec": 0, 00:19:59.608 "r_mbytes_per_sec": 0, 00:19:59.608 "w_mbytes_per_sec": 0 00:19:59.608 }, 00:19:59.608 "claimed": false, 00:19:59.608 "zoned": false, 00:19:59.608 "supported_io_types": { 00:19:59.608 "read": true, 00:19:59.608 "write": true, 00:19:59.608 "unmap": true, 00:19:59.608 "flush": false, 00:19:59.608 "reset": true, 00:19:59.608 "nvme_admin": false, 00:19:59.608 "nvme_io": false, 00:19:59.608 "nvme_io_md": false, 00:19:59.608 "write_zeroes": true, 00:19:59.608 "zcopy": false, 00:19:59.608 "get_zone_info": false, 00:19:59.608 "zone_management": false, 00:19:59.608 "zone_append": false, 00:19:59.608 "compare": false, 00:19:59.608 "compare_and_write": false, 00:19:59.608 "abort": false, 00:19:59.608 "seek_hole": true, 00:19:59.608 "seek_data": true, 00:19:59.608 "copy": false, 00:19:59.608 "nvme_iov_md": false 00:19:59.608 }, 00:19:59.608 "driver_specific": { 00:19:59.608 "lvol": { 00:19:59.608 "lvol_store_uuid": "bc57d621-984f-450b-b9e7-31f8e6c41ba0", 00:19:59.608 "base_bdev": "nvme0n1", 00:19:59.608 "thin_provision": true, 00:19:59.608 "num_allocated_clusters": 0, 00:19:59.608 "snapshot": false, 00:19:59.608 "clone": false, 00:19:59.608 "esnap_clone": false 00:19:59.608 } 00:19:59.608 } 00:19:59.608 } 00:19:59.608 ]' 00:19:59.608 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:59.867 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:59.867 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:59.867 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:59.867 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:59.867 12:45:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:59.867 12:45:59 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:59.867 12:45:59 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 07747dce-33e0-413e-bd91-4464c36c0a4b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:59.867 [2024-12-14 12:45:59.596881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.867 [2024-12-14 12:45:59.596920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.867 [2024-12-14 12:45:59.596934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.867 [2024-12-14 12:45:59.596941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.867 [2024-12-14 12:45:59.599202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.867 [2024-12-14 12:45:59.599334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.867 [2024-12-14 12:45:59.599350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:19:59.867 [2024-12-14 12:45:59.599356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.867 [2024-12-14 12:45:59.599432] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.867 [2024-12-14 12:45:59.599963] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.867 [2024-12-14 12:45:59.599983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.867 [2024-12-14 12:45:59.599990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.867 [2024-12-14 12:45:59.599998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:19:59.867 [2024-12-14 12:45:59.600003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.867 [2024-12-14 12:45:59.600331] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 000400a1-d0eb-4378-a146-f83adc96f65b 00:19:59.867 [2024-12-14 12:45:59.601332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.867 [2024-12-14 12:45:59.601363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:59.867 [2024-12-14 12:45:59.601373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:59.867 [2024-12-14 12:45:59.601380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.125 [2024-12-14 12:45:59.606666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.125 [2024-12-14 12:45:59.606778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.125 [2024-12-14 12:45:59.606792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.206 ms 00:20:00.125 [2024-12-14 12:45:59.606799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.125 [2024-12-14 12:45:59.606900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.125 [2024-12-14 12:45:59.606910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.125 [2024-12-14 12:45:59.606917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.057 ms 00:20:00.125 [2024-12-14 12:45:59.606927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.125 [2024-12-14 12:45:59.606958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.125 [2024-12-14 12:45:59.606966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.125 [2024-12-14 12:45:59.606972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:00.125 [2024-12-14 12:45:59.606981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.125 [2024-12-14 12:45:59.607005] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.126 [2024-12-14 12:45:59.609935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.126 [2024-12-14 12:45:59.610030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.126 [2024-12-14 12:45:59.610046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:20:00.126 [2024-12-14 12:45:59.610052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.126 [2024-12-14 12:45:59.610100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.126 [2024-12-14 12:45:59.610116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.126 [2024-12-14 12:45:59.610124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:00.126 [2024-12-14 12:45:59.610130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.126 [2024-12-14 12:45:59.610156] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:00.126 [2024-12-14 12:45:59.610264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.126 [2024-12-14 12:45:59.610276] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.126 [2024-12-14 12:45:59.610284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.126 [2024-12-14 12:45:59.610293] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610299] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610306] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.126 [2024-12-14 12:45:59.610312] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.126 [2024-12-14 12:45:59.610320] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.126 [2024-12-14 12:45:59.610326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.126 [2024-12-14 12:45:59.610334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.126 [2024-12-14 12:45:59.610339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.126 [2024-12-14 12:45:59.610346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:20:00.126 [2024-12-14 12:45:59.610352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.126 [2024-12-14 12:45:59.610434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.126 
[2024-12-14 12:45:59.610440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.126 [2024-12-14 12:45:59.610447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:00.126 [2024-12-14 12:45:59.610452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.126 [2024-12-14 12:45:59.610564] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.126 [2024-12-14 12:45:59.610572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.126 [2024-12-14 12:45:59.610579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.126 [2024-12-14 12:45:59.610597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.126 [2024-12-14 12:45:59.610615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.126 [2024-12-14 12:45:59.610627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.126 [2024-12-14 12:45:59.610631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.126 [2024-12-14 12:45:59.610638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.126 [2024-12-14 12:45:59.610643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.126 [2024-12-14 12:45:59.610650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.126 [2024-12-14 12:45:59.610655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.126 [2024-12-14 12:45:59.610667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.126 [2024-12-14 12:45:59.610684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.126 [2024-12-14 12:45:59.610701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.126 [2024-12-14 12:45:59.610719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:20:00.126 [2024-12-14 12:45:59.610735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.126 [2024-12-14 12:45:59.610754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.126 [2024-12-14 12:45:59.610765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.126 [2024-12-14 12:45:59.610769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.126 [2024-12-14 12:45:59.610777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.126 [2024-12-14 12:45:59.610781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.126 [2024-12-14 12:45:59.610788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.126 [2024-12-14 12:45:59.610793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.126 [2024-12-14 12:45:59.610803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.126 [2024-12-14 12:45:59.610809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610814] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.126 [2024-12-14 12:45:59.610821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.126 [2024-12-14 12:45:59.610826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.126 [2024-12-14 12:45:59.610838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.126 [2024-12-14 12:45:59.610846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.126 [2024-12-14 12:45:59.610851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.126 [2024-12-14 12:45:59.610857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.126 [2024-12-14 12:45:59.610862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.126 [2024-12-14 12:45:59.610868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.126 [2024-12-14 12:45:59.610874] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.126 [2024-12-14 12:45:59.610883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.126 [2024-12-14 12:45:59.610891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.126 [2024-12-14 12:45:59.610898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.126 [2024-12-14 12:45:59.610903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:20:00.126 [2024-12-14 12:45:59.610910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.126 [2024-12-14 12:45:59.610915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.126 [2024-12-14 12:45:59.610922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.126 [2024-12-14 12:45:59.610928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.126 [2024-12-14 12:45:59.610935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.126 [2024-12-14 12:45:59.610940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.126 [2024-12-14 12:45:59.610948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.126 [2024-12-14 12:45:59.610954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.127 [2024-12-14 12:45:59.610960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.127 [2024-12-14 12:45:59.610966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.127 [2024-12-14 12:45:59.610972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.127 [2024-12-14 12:45:59.610978] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.127 [2024-12-14 12:45:59.610987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.127 [2024-12-14 12:45:59.610993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.127 [2024-12-14 12:45:59.610999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.127 [2024-12-14 12:45:59.611004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.127 [2024-12-14 12:45:59.611012] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.127 [2024-12-14 12:45:59.611018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.127 [2024-12-14 12:45:59.611025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.127 [2024-12-14 12:45:59.611031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:20:00.127 [2024-12-14 12:45:59.611037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.127 [2024-12-14 12:45:59.611127] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:20:00.127 [2024-12-14 12:45:59.611139] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:02.657 [2024-12-14 12:46:02.014496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.014729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:02.657 [2024-12-14 12:46:02.014750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2403.360 ms 00:20:02.657 [2024-12-14 12:46:02.014761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.040675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.040719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.657 [2024-12-14 12:46:02.040731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.672 ms 00:20:02.657 [2024-12-14 12:46:02.040742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.040870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.040883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:02.657 [2024-12-14 12:46:02.040907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:02.657 [2024-12-14 12:46:02.040918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.082168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.082210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.657 [2024-12-14 12:46:02.082222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.222 ms 00:20:02.657 [2024-12-14 12:46:02.082233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.082323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.082337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.657 [2024-12-14 12:46:02.082345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:02.657 [2024-12-14 12:46:02.082354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.082687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.082707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.657 [2024-12-14 12:46:02.082715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:20:02.657 [2024-12-14 12:46:02.082724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.082837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.082847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.657 [2024-12-14 12:46:02.082870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:20:02.657 [2024-12-14 12:46:02.082881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.097459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.097491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:02.657 [2024-12-14 12:46:02.097500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.545 ms 00:20:02.657 [2024-12-14 12:46:02.097509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.109265] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:02.657 [2024-12-14 12:46:02.123983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.124015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.657 [2024-12-14 12:46:02.124027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.372 ms 00:20:02.657 [2024-12-14 12:46:02.124035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.189422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.189462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:02.657 [2024-12-14 12:46:02.189476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.302 ms 00:20:02.657 [2024-12-14 12:46:02.189484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.189740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.189753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.657 [2024-12-14 12:46:02.189766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:20:02.657 [2024-12-14 12:46:02.189774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.212928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.212959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:02.657 [2024-12-14 12:46:02.212972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.117 ms 00:20:02.657 [2024-12-14 12:46:02.212979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.235537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.235672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:02.657 [2024-12-14 12:46:02.235692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.496 ms 00:20:02.657 [2024-12-14 12:46:02.235699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.236297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.236317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.657 [2024-12-14 12:46:02.236329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:20:02.657 [2024-12-14 12:46:02.236336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.305139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.305173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:02.657 [2024-12-14 12:46:02.305188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.766 ms 00:20:02.657 [2024-12-14 12:46:02.305196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
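The geometry this startup trace prints is internally consistent, and the ftl_l2p_cache notice just above ("l2p maximum resident size is: 59 (of 60) MiB") follows directly from it. A minimal cross-check in plain shell arithmetic, using only values that appear in this log (the 102400.00 MiB data_btm region, the 4096-byte block size, and the --overprovisioning 10 / --l2p_dram_limit 60 arguments passed to bdev_ftl_create):

data_mib=102400                               # "Region data_btm ... blocks: 102400.00 MiB"
blocks=$(( data_mib * 1024 * 1024 / 4096 ))   # 26214400 raw 4 KiB data blocks
user_blocks=$(( blocks * 90 / 100 ))          # 23592960 left after --overprovisioning 10
l2p_mib=$(( user_blocks * 4 / 1024 / 1024 ))  # 90 MiB of 4-byte L2P entries
echo "$user_blocks user blocks, $l2p_mib MiB L2P table"

This reproduces both "L2P entries: 23592960" (also the num_blocks later reported for ftl0) and the 90.00 MiB l2p region in the layout dump; since 90 MiB exceeds the 60 MiB --l2p_dram_limit, the L2P is cached rather than kept fully resident, hence 59 of 60 MiB.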
00:20:02.657 [2024-12-14 12:46:02.329243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.329274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:02.657 [2024-12-14 12:46:02.329286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.949 ms 00:20:02.657 [2024-12-14 12:46:02.329294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.352278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.352307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:02.657 [2024-12-14 12:46:02.352319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.920 ms 00:20:02.657 [2024-12-14 12:46:02.352326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.375645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.375689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.657 [2024-12-14 12:46:02.375701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.246 ms 00:20:02.657 [2024-12-14 12:46:02.375709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.375774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.375786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.657 [2024-12-14 12:46:02.375798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:02.657 [2024-12-14 12:46:02.375806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.657 [2024-12-14 12:46:02.375877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.657 [2024-12-14 12:46:02.375887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.657 [2024-12-14 12:46:02.375896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:02.657 [2024-12-14 12:46:02.375903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.658 [2024-12-14 12:46:02.376996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.658 [2024-12-14 12:46:02.379993] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2779.842 ms, result 0 00:20:02.658 [2024-12-14 12:46:02.380808] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.658 { 00:20:02.658 "name": "ftl0", 00:20:02.658 "uuid": "000400a1-d0eb-4378-a146-f83adc96f65b" 00:20:02.658 } 00:20:02.915 12:46:02 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:02.915 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:02.915 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:02.915 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:02.915 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:02.915 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:02.915 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:02.915 12:46:02 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:03.173 [ 00:20:03.173 { 00:20:03.173 "name": "ftl0", 00:20:03.173 "aliases": [ 00:20:03.173 "000400a1-d0eb-4378-a146-f83adc96f65b" 00:20:03.173 ], 00:20:03.173 "product_name": "FTL disk", 00:20:03.173 "block_size": 4096, 00:20:03.173 "num_blocks": 23592960, 00:20:03.173 "uuid": "000400a1-d0eb-4378-a146-f83adc96f65b", 00:20:03.173 "assigned_rate_limits": { 00:20:03.173 "rw_ios_per_sec": 0, 00:20:03.173 "rw_mbytes_per_sec": 0, 00:20:03.173 "r_mbytes_per_sec": 0, 00:20:03.173 "w_mbytes_per_sec": 0 00:20:03.173 }, 00:20:03.173 "claimed": false, 00:20:03.173 "zoned": false, 00:20:03.173 "supported_io_types": { 00:20:03.173 "read": true, 00:20:03.173 "write": true, 00:20:03.173 "unmap": true, 00:20:03.173 "flush": true, 00:20:03.173 "reset": false, 00:20:03.173 "nvme_admin": false, 00:20:03.173 "nvme_io": false, 00:20:03.173 "nvme_io_md": false, 00:20:03.173 "write_zeroes": true, 00:20:03.173 "zcopy": false, 00:20:03.173 "get_zone_info": false, 00:20:03.173 "zone_management": false, 00:20:03.173 "zone_append": false, 00:20:03.173 "compare": false, 00:20:03.173 "compare_and_write": false, 00:20:03.173 "abort": false, 00:20:03.173 "seek_hole": false, 00:20:03.173 "seek_data": false, 00:20:03.173 "copy": false, 00:20:03.173 "nvme_iov_md": false 00:20:03.173 }, 00:20:03.173 "driver_specific": { 00:20:03.173 "ftl": { 00:20:03.173 "base_bdev": "07747dce-33e0-413e-bd91-4464c36c0a4b", 00:20:03.173 "cache": "nvc0n1p0" 00:20:03.173 } 00:20:03.173 } 00:20:03.173 } 00:20:03.173 ] 00:20:03.173 12:46:02 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:03.173 12:46:02 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:03.173 12:46:02 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:03.431 12:46:02 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:03.431 12:46:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:03.690 12:46:03 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:03.690 { 00:20:03.690 "name": "ftl0", 00:20:03.690 "aliases": [ 00:20:03.690 "000400a1-d0eb-4378-a146-f83adc96f65b" 00:20:03.690 ], 00:20:03.690 "product_name": "FTL disk", 00:20:03.690 "block_size": 4096, 00:20:03.690 "num_blocks": 23592960, 00:20:03.690 "uuid": "000400a1-d0eb-4378-a146-f83adc96f65b", 00:20:03.690 "assigned_rate_limits": { 00:20:03.690 "rw_ios_per_sec": 0, 00:20:03.690 "rw_mbytes_per_sec": 0, 00:20:03.690 "r_mbytes_per_sec": 0, 00:20:03.690 "w_mbytes_per_sec": 0 00:20:03.690 }, 00:20:03.690 "claimed": false, 00:20:03.690 "zoned": false, 00:20:03.690 "supported_io_types": { 00:20:03.690 "read": true, 00:20:03.690 "write": true, 00:20:03.690 "unmap": true, 00:20:03.690 "flush": true, 00:20:03.690 "reset": false, 00:20:03.690 "nvme_admin": false, 00:20:03.690 "nvme_io": false, 00:20:03.690 "nvme_io_md": false, 00:20:03.690 "write_zeroes": true, 00:20:03.690 "zcopy": false, 00:20:03.690 "get_zone_info": false, 00:20:03.690 "zone_management": false, 00:20:03.690 "zone_append": false, 00:20:03.690 "compare": false, 00:20:03.690 "compare_and_write": false, 00:20:03.690 "abort": false, 00:20:03.690 "seek_hole": false, 00:20:03.690 "seek_data": false, 00:20:03.690 "copy": false, 00:20:03.690 "nvme_iov_md": false 00:20:03.690 }, 00:20:03.690 "driver_specific": { 00:20:03.690 "ftl": { 00:20:03.690 "base_bdev": "07747dce-33e0-413e-bd91-4464c36c0a4b", 
00:20:03.690 "cache": "nvc0n1p0" 00:20:03.690 } 00:20:03.690 } 00:20:03.690 } 00:20:03.690 ]' 00:20:03.690 12:46:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:03.690 12:46:03 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:03.690 12:46:03 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:03.690 [2024-12-14 12:46:03.412528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.690 [2024-12-14 12:46:03.412571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:03.690 [2024-12-14 12:46:03.412586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:03.690 [2024-12-14 12:46:03.412597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.690 [2024-12-14 12:46:03.412629] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:03.690 [2024-12-14 12:46:03.415256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.690 [2024-12-14 12:46:03.415285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:03.690 [2024-12-14 12:46:03.415299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.610 ms 00:20:03.690 [2024-12-14 12:46:03.415307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.690 [2024-12-14 12:46:03.415865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.690 [2024-12-14 12:46:03.415886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:03.690 [2024-12-14 12:46:03.415897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:20:03.690 [2024-12-14 12:46:03.415904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.690 [2024-12-14 12:46:03.419555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.690 [2024-12-14 12:46:03.419579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:03.690 [2024-12-14 12:46:03.419591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.620 ms 00:20:03.690 [2024-12-14 12:46:03.419598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.426608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.426745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:03.949 [2024-12-14 12:46:03.426764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.962 ms 00:20:03.949 [2024-12-14 12:46:03.426772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.450257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.450365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:03.949 [2024-12-14 12:46:03.450421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.391 ms 00:20:03.949 [2024-12-14 12:46:03.450444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.465645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.465757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:03.949 [2024-12-14 12:46:03.465815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.115 ms 00:20:03.949 [2024-12-14 12:46:03.465840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.466074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.466105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:03.949 [2024-12-14 12:46:03.466128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:20:03.949 [2024-12-14 12:46:03.466182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.489064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.489170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:03.949 [2024-12-14 12:46:03.489225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.825 ms 00:20:03.949 [2024-12-14 12:46:03.489246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.511857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.511960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:03.949 [2024-12-14 12:46:03.512016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.497 ms 00:20:03.949 [2024-12-14 12:46:03.512037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.534391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.949 [2024-12-14 12:46:03.534502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.949 [2024-12-14 12:46:03.534554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.273 ms 00:20:03.949 [2024-12-14 12:46:03.534575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.949 [2024-12-14 12:46:03.556924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.950 [2024-12-14 12:46:03.557027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.950 [2024-12-14 12:46:03.557098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.222 ms 00:20:03.950 [2024-12-14 12:46:03.557121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.950 [2024-12-14 12:46:03.557195] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.950 [2024-12-14 12:46:03.557225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557822] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.557982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.558961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 
[2024-12-14 12:46:03.558988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.559995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:03.950 [2024-12-14 12:46:03.560137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:03.950 [2024-12-14 12:46:03.560582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:03.951 [2024-12-14 12:46:03.560740] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:03.951 [2024-12-14 12:46:03.560750] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:20:03.951 [2024-12-14 12:46:03.560758] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:03.951 [2024-12-14 12:46:03.560766] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:03.951 [2024-12-14 12:46:03.560774] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:03.951 [2024-12-14 12:46:03.560784] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:03.951 [2024-12-14 12:46:03.560791] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.951 [2024-12-14 12:46:03.560800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:03.951 [2024-12-14 12:46:03.560808] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.951 [2024-12-14 12:46:03.560815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.951 [2024-12-14 12:46:03.560821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.951 [2024-12-14 12:46:03.560830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.951 [2024-12-14 12:46:03.560838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.951 [2024-12-14 12:46:03.560849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.638 ms 00:20:03.951 [2024-12-14 12:46:03.560856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.951 [2024-12-14 12:46:03.573635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.951 [2024-12-14 12:46:03.573742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.951 [2024-12-14 12:46:03.573794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.726 ms 00:20:03.951 [2024-12-14 12:46:03.573838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.951 [2024-12-14 12:46:03.574248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.951 [2024-12-14 12:46:03.574346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.951 [2024-12-14 12:46:03.574400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:20:03.951 [2024-12-14 12:46:03.574422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.951 [2024-12-14 12:46:03.618276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.951 [2024-12-14 12:46:03.618385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.951 [2024-12-14 12:46:03.618436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.951 [2024-12-14 12:46:03.618459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.951 [2024-12-14 12:46:03.618602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.951 [2024-12-14 12:46:03.618630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.951 [2024-12-14 12:46:03.618683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.951 [2024-12-14 12:46:03.618705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.951 [2024-12-14 12:46:03.618782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.951 [2024-12-14 12:46:03.618807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.951 [2024-12-14 12:46:03.618832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.951 [2024-12-14 12:46:03.618880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.951 [2024-12-14 12:46:03.618930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.951 [2024-12-14 12:46:03.619029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.951 [2024-12-14 12:46:03.619066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.951 [2024-12-14 12:46:03.619138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.700454] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.700593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:04.209 [2024-12-14 12:46:03.700649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.700671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.764348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.764482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:04.209 [2024-12-14 12:46:03.764554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.764578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.764691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.764716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.209 [2024-12-14 12:46:03.764782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.764807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.764877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.764898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.209 [2024-12-14 12:46:03.764919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.764964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.765152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.765183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.209 [2024-12-14 12:46:03.765237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.765296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.765374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.765424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:04.209 [2024-12-14 12:46:03.765451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.765469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.765529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.765630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.209 [2024-12-14 12:46:03.765668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.765688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.209 [2024-12-14 12:46:03.765761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:04.209 [2024-12-14 12:46:03.765786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.209 [2024-12-14 12:46:03.765807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:04.209 [2024-12-14 12:46:03.765827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:04.209 [2024-12-14 12:46:03.766114] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.549 ms, result 0 00:20:04.209 true 00:20:04.209 12:46:03 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78080 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78080 ']' 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78080 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78080 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.209 killing process with pid 78080 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78080' 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78080 00:20:04.209 12:46:03 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78080 00:20:10.777 12:46:09 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:11.350 65536+0 records in 00:20:11.350 65536+0 records out 00:20:11.350 268435456 bytes (268 MB, 256 MiB) copied, 1.06169 s, 253 MB/s 00:20:11.350 12:46:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:11.350 [2024-12-14 12:46:11.032646] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
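The trim.sh@66 step above generates 256 MiB of random data with dd and then copies it into the FTL bdev with spdk_dd. Note that bash xtrace does not display shell redirections, so dd's output file is absent from the trace; judging by the --if argument handed to spdk_dd immediately afterwards, it is the random_pattern file. A standalone sketch of the same two steps, with that redirection spelled out as an assumption (the spdk_dd flags and paths are copied verbatim from the trace):

# Assumed redirection target; the trace hides it, but spdk_dd reads this path next.
dd if=/dev/urandom bs=4K count=65536 > /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
# Copy the file into the ftl0 bdev using the JSON config saved earlier by save_subsystem_config.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
    --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json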
00:20:11.350 [2024-12-14 12:46:11.032766] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78257 ] 00:20:11.610 [2024-12-14 12:46:11.192469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.610 [2024-12-14 12:46:11.293564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.869 [2024-12-14 12:46:11.589579] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.869 [2024-12-14 12:46:11.589681] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:12.131 [2024-12-14 12:46:11.748186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.748230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:12.131 [2024-12-14 12:46:11.748243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.131 [2024-12-14 12:46:11.748251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.754805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.754910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:12.131 [2024-12-14 12:46:11.754943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.532 ms 00:20:12.131 [2024-12-14 12:46:11.754968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.755667] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:12.131 [2024-12-14 12:46:11.758123] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:12.131 [2024-12-14 12:46:11.758198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.758223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:12.131 [2024-12-14 12:46:11.758248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:20:12.131 [2024-12-14 12:46:11.758270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.760300] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:12.131 [2024-12-14 12:46:11.773853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.773972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:12.131 [2024-12-14 12:46:11.773989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.561 ms 00:20:12.131 [2024-12-14 12:46:11.773996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.774100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.774112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:12.131 [2024-12-14 12:46:11.774123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:12.131 [2024-12-14 12:46:11.774130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.779189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
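
Every FTL management step in this startup is logged by trace_step() in mngt/ftl_mngt.c as a fixed group of NOTICE lines: Action (or Rollback during teardown), name, duration, status. For runs this long it helps to collapse the groups into name/duration pairs; a throwaway filter in that spirit, with build.log standing in for wherever this console output was captured:

    grep -oE '\[FTL\]\[ftl0\] (name|duration): .*' build.log \
        | sed 's/^\[FTL\]\[ftl0\] //' \
        | paste - -
    # prints e.g.:  name: Load super block    duration: 13.561 ms
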
00:20:12.131 [2024-12-14 12:46:11.779218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:12.131 [2024-12-14 12:46:11.779228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.018 ms 00:20:12.131 [2024-12-14 12:46:11.779235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.779320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.779329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:12.131 [2024-12-14 12:46:11.779337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:12.131 [2024-12-14 12:46:11.779344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.779372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.779381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:12.131 [2024-12-14 12:46:11.779388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:12.131 [2024-12-14 12:46:11.779395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.131 [2024-12-14 12:46:11.779414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:12.131 [2024-12-14 12:46:11.782698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.131 [2024-12-14 12:46:11.782727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:12.131 [2024-12-14 12:46:11.782736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:20:12.132 [2024-12-14 12:46:11.782743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.132 [2024-12-14 12:46:11.782780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.132 [2024-12-14 12:46:11.782789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:12.132 [2024-12-14 12:46:11.782797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:12.132 [2024-12-14 12:46:11.782803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.132 [2024-12-14 12:46:11.782822] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:12.132 [2024-12-14 12:46:11.782840] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:12.132 [2024-12-14 12:46:11.782873] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:12.132 [2024-12-14 12:46:11.782887] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:12.132 [2024-12-14 12:46:11.782989] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:12.132 [2024-12-14 12:46:11.782999] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:12.132 [2024-12-14 12:46:11.783009] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:12.132 [2024-12-14 12:46:11.783021] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783030] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783037] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:12.132 [2024-12-14 12:46:11.783044] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:12.132 [2024-12-14 12:46:11.783052] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:12.132 [2024-12-14 12:46:11.783070] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:12.132 [2024-12-14 12:46:11.783078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.132 [2024-12-14 12:46:11.783085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:12.132 [2024-12-14 12:46:11.783093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:20:12.132 [2024-12-14 12:46:11.783100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.132 [2024-12-14 12:46:11.783187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.132 [2024-12-14 12:46:11.783197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:12.132 [2024-12-14 12:46:11.783205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:12.132 [2024-12-14 12:46:11.783212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.132 [2024-12-14 12:46:11.783311] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:12.132 [2024-12-14 12:46:11.783321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:12.132 [2024-12-14 12:46:11.783329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:12.132 [2024-12-14 12:46:11.783350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:12.132 [2024-12-14 12:46:11.783371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.132 [2024-12-14 12:46:11.783385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:12.132 [2024-12-14 12:46:11.783397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:12.132 [2024-12-14 12:46:11.783404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:12.132 [2024-12-14 12:46:11.783410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:12.132 [2024-12-14 12:46:11.783417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:12.132 [2024-12-14 12:46:11.783423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:12.132 [2024-12-14 12:46:11.783437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783443] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:12.132 [2024-12-14 12:46:11.783456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:12.132 [2024-12-14 12:46:11.783474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:12.132 [2024-12-14 12:46:11.783494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:12.132 [2024-12-14 12:46:11.783513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:12.132 [2024-12-14 12:46:11.783532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.132 [2024-12-14 12:46:11.783544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:12.132 [2024-12-14 12:46:11.783550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:12.132 [2024-12-14 12:46:11.783556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:12.132 [2024-12-14 12:46:11.783562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:12.132 [2024-12-14 12:46:11.783568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:12.132 [2024-12-14 12:46:11.783575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:12.132 [2024-12-14 12:46:11.783587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:12.132 [2024-12-14 12:46:11.783594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783601] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:12.132 [2024-12-14 12:46:11.783608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:12.132 [2024-12-14 12:46:11.783617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:12.132 [2024-12-14 12:46:11.783631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:12.132 [2024-12-14 12:46:11.783637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:12.132 [2024-12-14 12:46:11.783645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:12.132 
[2024-12-14 12:46:11.783652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:12.132 [2024-12-14 12:46:11.783658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:12.132 [2024-12-14 12:46:11.783664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:12.132 [2024-12-14 12:46:11.783672] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:12.132 [2024-12-14 12:46:11.783681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.132 [2024-12-14 12:46:11.783689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:12.132 [2024-12-14 12:46:11.783696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:12.132 [2024-12-14 12:46:11.783703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:12.132 [2024-12-14 12:46:11.783709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:12.132 [2024-12-14 12:46:11.783716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:12.132 [2024-12-14 12:46:11.783723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:12.132 [2024-12-14 12:46:11.783730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:12.132 [2024-12-14 12:46:11.783737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:12.132 [2024-12-14 12:46:11.783744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:12.132 [2024-12-14 12:46:11.783750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:12.132 [2024-12-14 12:46:11.783757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:12.132 [2024-12-14 12:46:11.783764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:12.132 [2024-12-14 12:46:11.783770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:12.132 [2024-12-14 12:46:11.783777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:12.132 [2024-12-14 12:46:11.783784] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:12.132 [2024-12-14 12:46:11.783792] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:12.132 [2024-12-14 12:46:11.783800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:12.133 [2024-12-14 12:46:11.783807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:12.133 [2024-12-14 12:46:11.783814] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:12.133 [2024-12-14 12:46:11.783822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:12.133 [2024-12-14 12:46:11.783829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.783839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:12.133 [2024-12-14 12:46:11.783846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:20:12.133 [2024-12-14 12:46:11.783852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.133 [2024-12-14 12:46:11.810402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.810527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.133 [2024-12-14 12:46:11.810583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.487 ms 00:20:12.133 [2024-12-14 12:46:11.810606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.133 [2024-12-14 12:46:11.810748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.810819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:12.133 [2024-12-14 12:46:11.810844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:12.133 [2024-12-14 12:46:11.810863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.133 [2024-12-14 12:46:11.854076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.854219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.133 [2024-12-14 12:46:11.854299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.152 ms 00:20:12.133 [2024-12-14 12:46:11.854323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.133 [2024-12-14 12:46:11.854430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.854458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.133 [2024-12-14 12:46:11.854529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:12.133 [2024-12-14 12:46:11.854552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.133 [2024-12-14 12:46:11.854915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.855352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.133 [2024-12-14 12:46:11.855400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:20:12.133 [2024-12-14 12:46:11.855539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.133 [2024-12-14 12:46:11.855950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.133 [2024-12-14 12:46:11.855980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.133 [2024-12-14 12:46:11.855991] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:12.133 [2024-12-14 12:46:11.855999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.869903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.869935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.393 [2024-12-14 12:46:11.869945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.883 ms 00:20:12.393 [2024-12-14 12:46:11.869952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.882804] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:12.393 [2024-12-14 12:46:11.882837] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:12.393 [2024-12-14 12:46:11.882849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.882857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:12.393 [2024-12-14 12:46:11.882865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.775 ms 00:20:12.393 [2024-12-14 12:46:11.882872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.907076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.907108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:12.393 [2024-12-14 12:46:11.907119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.137 ms 00:20:12.393 [2024-12-14 12:46:11.907127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.918920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.918948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:12.393 [2024-12-14 12:46:11.918957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.727 ms 00:20:12.393 [2024-12-14 12:46:11.918963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.930099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.930125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:12.393 [2024-12-14 12:46:11.930135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.075 ms 00:20:12.393 [2024-12-14 12:46:11.930141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.930752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.930833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:12.393 [2024-12-14 12:46:11.930842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:20:12.393 [2024-12-14 12:46:11.930849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.985520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:11.985685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:12.393 [2024-12-14 12:46:11.985704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.649 ms 00:20:12.393 [2024-12-14 12:46:11.985713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:11.996226] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:12.393 [2024-12-14 12:46:12.010303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.010340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:12.393 [2024-12-14 12:46:12.010351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.505 ms 00:20:12.393 [2024-12-14 12:46:12.010359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:12.010443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.010454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:12.393 [2024-12-14 12:46:12.010463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:12.393 [2024-12-14 12:46:12.010470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:12.010516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.010525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:12.393 [2024-12-14 12:46:12.010533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:12.393 [2024-12-14 12:46:12.010541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:12.010573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.010584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:12.393 [2024-12-14 12:46:12.010592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:12.393 [2024-12-14 12:46:12.010600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:12.010629] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:12.393 [2024-12-14 12:46:12.010639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.010646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:12.393 [2024-12-14 12:46:12.010654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:12.393 [2024-12-14 12:46:12.010661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:12.034692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.034816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:12.393 [2024-12-14 12:46:12.034833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.008 ms 00:20:12.393 [2024-12-14 12:46:12.034841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.393 [2024-12-14 12:46:12.034929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.393 [2024-12-14 12:46:12.034939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:12.393 [2024-12-14 12:46:12.034948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:12.393 [2024-12-14 12:46:12.034956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
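
The layout dump above is internally consistent once the fixed 4 KiB FTL block size is factored in: the hex blk_offs/blk_sz values in the superblock metadata translate exactly into the MiB figures printed by dump_region, and the l2p region is sized as L2P entries times the 4-byte address size. A quick cross-check in shell arithmetic (4096 being SPDK FTL's block size):

    echo $(( 23592960 * 4 ))              # L2P entries x 4 B = 94371840 B = 90.00 MiB, the l2p region size
    echo $(( 0x5a00 * 4096 / 1048576 ))   # region 0x2 (l2p) blk_sz 0x5a00 blocks -> the same 90 MiB
    echo $(( 0x20 * 4096 ))               # l2p blk_offs 0x20 -> 131072 B, the 0.12 MiB offset
    echo $(( 0x80 * 4096 / 1024 ))        # band_md blk_sz 0x80 -> 512 KiB, shown as 0.50 MiB
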
00:20:12.393 [2024-12-14 12:46:12.035814] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:12.393 [2024-12-14 12:46:12.038903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.345 ms, result 0 00:20:12.393 [2024-12-14 12:46:12.040178] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.393 [2024-12-14 12:46:12.053254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:13.338  [2024-12-14T12:46:14.516Z] Copying: 14/256 [MB] (14 MBps) [2024-12-14T12:46:15.088Z] Copying: 30/256 [MB] (15 MBps) [2024-12-14T12:46:16.474Z] Copying: 41/256 [MB] (10 MBps) [2024-12-14T12:46:17.417Z] Copying: 67/256 [MB] (26 MBps) [2024-12-14T12:46:18.362Z] Copying: 84/256 [MB] (17 MBps) [2024-12-14T12:46:19.306Z] Copying: 101/256 [MB] (16 MBps) [2024-12-14T12:46:20.247Z] Copying: 115/256 [MB] (13 MBps) [2024-12-14T12:46:21.190Z] Copying: 128/256 [MB] (13 MBps) [2024-12-14T12:46:22.132Z] Copying: 140/256 [MB] (12 MBps) [2024-12-14T12:46:23.076Z] Copying: 153/256 [MB] (12 MBps) [2024-12-14T12:46:24.462Z] Copying: 167200/262144 [kB] (10128 kBps) [2024-12-14T12:46:25.405Z] Copying: 173/256 [MB] (10 MBps) [2024-12-14T12:46:26.349Z] Copying: 187800/262144 [kB] (10212 kBps) [2024-12-14T12:46:27.292Z] Copying: 193/256 [MB] (10 MBps) [2024-12-14T12:46:27.554Z] Copying: 235/256 [MB] (41 MBps) [2024-12-14T12:46:27.554Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-14 12:46:27.515416] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.817 [2024-12-14 12:46:27.522612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.817 [2024-12-14 12:46:27.522731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:27.817 [2024-12-14 12:46:27.522746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:27.817 [2024-12-14 12:46:27.522753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.817 [2024-12-14 12:46:27.522778] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:27.817 [2024-12-14 12:46:27.524843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.817 [2024-12-14 12:46:27.524862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:27.817 [2024-12-14 12:46:27.524871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:20:27.817 [2024-12-14 12:46:27.524877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.817 [2024-12-14 12:46:27.526476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.817 [2024-12-14 12:46:27.526502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:27.817 [2024-12-14 12:46:27.526510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.573 ms 00:20:27.817 [2024-12-14 12:46:27.526516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.817 [2024-12-14 12:46:27.532231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.817 [2024-12-14 12:46:27.532259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:27.817 [2024-12-14 12:46:27.532267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 5.702 ms 00:20:27.817 [2024-12-14 12:46:27.532273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.817 [2024-12-14 12:46:27.538045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.817 [2024-12-14 12:46:27.538159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:27.817 [2024-12-14 12:46:27.538172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.747 ms 00:20:27.817 [2024-12-14 12:46:27.538178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.555378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.555403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:28.079 [2024-12-14 12:46:27.555412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.162 ms 00:20:28.079 [2024-12-14 12:46:27.555417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.566800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.566829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:28.079 [2024-12-14 12:46:27.566840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.357 ms 00:20:28.079 [2024-12-14 12:46:27.566847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.566941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.566948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:28.079 [2024-12-14 12:46:27.566954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:28.079 [2024-12-14 12:46:27.566965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.584886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.584910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:28.079 [2024-12-14 12:46:27.584918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.910 ms 00:20:28.079 [2024-12-14 12:46:27.584923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.602300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.602323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:28.079 [2024-12-14 12:46:27.602331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.340 ms 00:20:28.079 [2024-12-14 12:46:27.602336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.619437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.619461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:28.079 [2024-12-14 12:46:27.619469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.076 ms 00:20:28.079 [2024-12-14 12:46:27.619475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.635969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.079 [2024-12-14 12:46:27.635993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:28.079 
[2024-12-14 12:46:27.636001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.449 ms 00:20:28.079 [2024-12-14 12:46:27.636006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.079 [2024-12-14 12:46:27.636031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:28.079 [2024-12-14 12:46:27.636042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:28.079 [2024-12-14 12:46:27.636102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636344] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 
12:46:27.636482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:20:28.080 [2024-12-14 12:46:27.636623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:28.080 [2024-12-14 12:46:27.636646] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:28.080 [2024-12-14 12:46:27.636652] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:20:28.081 [2024-12-14 12:46:27.636661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:28.081 [2024-12-14 12:46:27.636667] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:28.081 [2024-12-14 12:46:27.636672] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:28.081 [2024-12-14 12:46:27.636678] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:28.081 [2024-12-14 12:46:27.636683] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:28.081 [2024-12-14 12:46:27.636689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:28.081 [2024-12-14 12:46:27.636694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:28.081 [2024-12-14 12:46:27.636699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:28.081 [2024-12-14 12:46:27.636704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:28.081 [2024-12-14 12:46:27.636709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.081 [2024-12-14 12:46:27.636717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:28.081 [2024-12-14 12:46:27.636723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:20:28.081 [2024-12-14 12:46:27.636729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.646011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.081 [2024-12-14 12:46:27.646034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:28.081 [2024-12-14 12:46:27.646041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.269 ms 00:20:28.081 [2024-12-14 12:46:27.646046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.646332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:28.081 [2024-12-14 12:46:27.646347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:28.081 [2024-12-14 12:46:27.646354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:20:28.081 [2024-12-14 12:46:27.646359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.673776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.673802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:28.081 [2024-12-14 12:46:27.673810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.673816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.673869] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.673876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:28.081 [2024-12-14 12:46:27.673882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.673887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.673918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.673926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:28.081 [2024-12-14 12:46:27.673931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.673936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.673949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.673956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:28.081 [2024-12-14 12:46:27.673962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.673968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.731812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.731841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:28.081 [2024-12-14 12:46:27.731850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.731856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:28.081 [2024-12-14 12:46:27.779558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:28.081 [2024-12-14 12:46:27.779614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:28.081 [2024-12-14 12:46:27.779659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:28.081 [2024-12-14 12:46:27.779748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:28.081 [2024-12-14 12:46:27.779788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:28.081 [2024-12-14 12:46:27.779838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:28.081 [2024-12-14 12:46:27.779882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:28.081 [2024-12-14 12:46:27.779890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:28.081 [2024-12-14 12:46:27.779896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:28.081 [2024-12-14 12:46:27.779997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 257.372 ms, result 0 00:20:29.022 00:20:29.022 00:20:29.022 12:46:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78438 00:20:29.022 12:46:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:29.022 12:46:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78438 00:20:29.022 12:46:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78438 ']' 00:20:29.022 12:46:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.022 12:46:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.022 12:46:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.022 12:46:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.022 12:46:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:29.022 [2024-12-14 12:46:28.515858] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
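
The shutdown above closes with ftl_dev_dump_bands printing all 100 bands one per line, each "0 / 261120 wr_cnt: 0 state: free"; at 4 KiB per block that is 261120 x 4 KiB = 1020 MiB per band, so the 100 bands account for 102000 of the 102400 MiB data_btm region. Assuming one log entry per line in the captured file, a dump that size summarizes down to a state histogram:

    grep -oE 'Band [0-9]+: .* state: [a-z]+' build.log \
        | awk '{ bands[$NF]++ } END { for (s in bands) print s ": " bands[s] " band(s)" }'
    # here: "free: 100 band(s)"

The fresh spdk_tgt instance (pid 78438) is then awaited by waitforlisten; from the locals visible in the trace (rpc_addr=/var/tmp/spdk.sock, max_retries=100) the helper is roughly the loop below. A rough sketch only: the real helper in common/autotest_common.sh also probes the RPC socket with rpc.py before declaring success.

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [ -n "$pid" ] || return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # the target died while we waited
            [ -S "$rpc_addr" ] && return 0           # socket is up (the real code then RPC-probes it)
            sleep 0.5
        done
        return 1
    }

Once it returns, rpc.py load_config replays the saved JSON configuration, which is why the same "unable to find bdev nvc0n1" retries and the FTL startup steps repeat below.
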
00:20:29.022 [2024-12-14 12:46:28.516197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78438 ] 00:20:29.022 [2024-12-14 12:46:28.676351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.283 [2024-12-14 12:46:28.761007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.854 12:46:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.854 12:46:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:29.854 12:46:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:29.854 [2024-12-14 12:46:29.550041] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.854 [2024-12-14 12:46:29.550094] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:30.116 [2024-12-14 12:46:29.718122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.718155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:30.116 [2024-12-14 12:46:29.718166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:30.116 [2024-12-14 12:46:29.718173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.720213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.720241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:30.116 [2024-12-14 12:46:29.720250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.024 ms 00:20:30.116 [2024-12-14 12:46:29.720256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.720313] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:30.116 [2024-12-14 12:46:29.721034] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:30.116 [2024-12-14 12:46:29.721085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.721093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:30.116 [2024-12-14 12:46:29.721102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:20:30.116 [2024-12-14 12:46:29.721108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.722253] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:30.116 [2024-12-14 12:46:29.731796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.731976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:30.116 [2024-12-14 12:46:29.731989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.545 ms 00:20:30.116 [2024-12-14 12:46:29.731996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.732072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.732083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:30.116 [2024-12-14 12:46:29.732089] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:30.116 [2024-12-14 12:46:29.732096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.736362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.736390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:30.116 [2024-12-14 12:46:29.736398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.229 ms 00:20:30.116 [2024-12-14 12:46:29.736404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.736477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.736486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:30.116 [2024-12-14 12:46:29.736492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:30.116 [2024-12-14 12:46:29.736502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.736519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.736526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:30.116 [2024-12-14 12:46:29.736532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:30.116 [2024-12-14 12:46:29.736538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.736556] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:30.116 [2024-12-14 12:46:29.739284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.739307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:30.116 [2024-12-14 12:46:29.739315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:20:30.116 [2024-12-14 12:46:29.739321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.739350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.739357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:30.116 [2024-12-14 12:46:29.739364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:30.116 [2024-12-14 12:46:29.739371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.739387] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:30.116 [2024-12-14 12:46:29.739402] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:30.116 [2024-12-14 12:46:29.739434] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:30.116 [2024-12-14 12:46:29.739446] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:30.116 [2024-12-14 12:46:29.739525] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:30.116 [2024-12-14 12:46:29.739533] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:30.116 [2024-12-14 12:46:29.739544] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:30.116 [2024-12-14 12:46:29.739552] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:30.116 [2024-12-14 12:46:29.739559] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:30.116 [2024-12-14 12:46:29.739566] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:30.116 [2024-12-14 12:46:29.739572] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:30.116 [2024-12-14 12:46:29.739578] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:30.116 [2024-12-14 12:46:29.739586] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:30.116 [2024-12-14 12:46:29.739592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.739599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:30.116 [2024-12-14 12:46:29.739604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:20:30.116 [2024-12-14 12:46:29.739611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.739678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.116 [2024-12-14 12:46:29.739685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:30.116 [2024-12-14 12:46:29.739691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:30.116 [2024-12-14 12:46:29.739697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.116 [2024-12-14 12:46:29.739773] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:30.116 [2024-12-14 12:46:29.739781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:30.116 [2024-12-14 12:46:29.739787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.116 [2024-12-14 12:46:29.739794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.116 [2024-12-14 12:46:29.739800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:30.116 [2024-12-14 12:46:29.739807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:30.116 [2024-12-14 12:46:29.739812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:30.116 [2024-12-14 12:46:29.739821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:30.116 [2024-12-14 12:46:29.739826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:30.116 [2024-12-14 12:46:29.739832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.116 [2024-12-14 12:46:29.739837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:30.116 [2024-12-14 12:46:29.739844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:30.116 [2024-12-14 12:46:29.739850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:30.116 [2024-12-14 12:46:29.739856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:30.116 [2024-12-14 12:46:29.739861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:30.116 [2024-12-14 12:46:29.739868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.116 
[2024-12-14 12:46:29.739873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:30.117 [2024-12-14 12:46:29.739880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:30.117 [2024-12-14 12:46:29.739888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.117 [2024-12-14 12:46:29.739894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:30.117 [2024-12-14 12:46:29.739899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:30.117 [2024-12-14 12:46:29.739905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.117 [2024-12-14 12:46:29.739911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:30.117 [2024-12-14 12:46:29.739918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:30.117 [2024-12-14 12:46:29.739922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.117 [2024-12-14 12:46:29.739928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:30.117 [2024-12-14 12:46:29.739933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:30.117 [2024-12-14 12:46:29.739939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.117 [2024-12-14 12:46:29.739944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:30.117 [2024-12-14 12:46:29.739951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:30.117 [2024-12-14 12:46:29.739956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:30.117 [2024-12-14 12:46:29.739963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:30.117 [2024-12-14 12:46:29.739967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:30.117 [2024-12-14 12:46:29.739973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.117 [2024-12-14 12:46:29.739978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:30.117 [2024-12-14 12:46:29.739984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:30.117 [2024-12-14 12:46:29.739988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:30.117 [2024-12-14 12:46:29.739994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:30.117 [2024-12-14 12:46:29.739999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:30.117 [2024-12-14 12:46:29.740007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.117 [2024-12-14 12:46:29.740011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:30.117 [2024-12-14 12:46:29.740018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:30.117 [2024-12-14 12:46:29.740023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.117 [2024-12-14 12:46:29.740030] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:30.117 [2024-12-14 12:46:29.740037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:30.117 [2024-12-14 12:46:29.740044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:30.117 [2024-12-14 12:46:29.740050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:30.117 [2024-12-14 12:46:29.740065] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:30.117 [2024-12-14 12:46:29.740071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:30.117 [2024-12-14 12:46:29.740077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:30.117 [2024-12-14 12:46:29.740083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:30.117 [2024-12-14 12:46:29.740089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:30.117 [2024-12-14 12:46:29.740094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:30.117 [2024-12-14 12:46:29.740101] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:30.117 [2024-12-14 12:46:29.740108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:30.117 [2024-12-14 12:46:29.740125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:30.117 [2024-12-14 12:46:29.740132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:30.117 [2024-12-14 12:46:29.740142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:30.117 [2024-12-14 12:46:29.740149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:30.117 [2024-12-14 12:46:29.740154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:30.117 [2024-12-14 12:46:29.740160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:30.117 [2024-12-14 12:46:29.740166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:30.117 [2024-12-14 12:46:29.740173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:30.117 [2024-12-14 12:46:29.740178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:30.117 [2024-12-14 12:46:29.740207] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:30.117 [2024-12-14 
12:46:29.740213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:30.117 [2024-12-14 12:46:29.740227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:30.117 [2024-12-14 12:46:29.740234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:30.117 [2024-12-14 12:46:29.740239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:30.117 [2024-12-14 12:46:29.740246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.740252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:30.117 [2024-12-14 12:46:29.740259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:20:30.117 [2024-12-14 12:46:29.740266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.760868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.760895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:30.117 [2024-12-14 12:46:29.760904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.558 ms 00:20:30.117 [2024-12-14 12:46:29.760912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.761001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.761009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:30.117 [2024-12-14 12:46:29.761016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:30.117 [2024-12-14 12:46:29.761021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.784706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.784733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:30.117 [2024-12-14 12:46:29.784742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.667 ms 00:20:30.117 [2024-12-14 12:46:29.784748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.784792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.784799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:30.117 [2024-12-14 12:46:29.784807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:30.117 [2024-12-14 12:46:29.784812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.785103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.785114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:30.117 [2024-12-14 12:46:29.785123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:20:30.117 [2024-12-14 12:46:29.785129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.785229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.785235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:30.117 [2024-12-14 12:46:29.785243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:30.117 [2024-12-14 12:46:29.785249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.796722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.796746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:30.117 [2024-12-14 12:46:29.796755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.456 ms 00:20:30.117 [2024-12-14 12:46:29.796760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.821601] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:30.117 [2024-12-14 12:46:29.821652] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:30.117 [2024-12-14 12:46:29.821672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.821683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:30.117 [2024-12-14 12:46:29.821697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.836 ms 00:20:30.117 [2024-12-14 12:46:29.821712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.841370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.841397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:30.117 [2024-12-14 12:46:29.841408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 00:20:30.117 [2024-12-14 12:46:29.841414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.117 [2024-12-14 12:46:29.850164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.117 [2024-12-14 12:46:29.850187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:30.117 [2024-12-14 12:46:29.850197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.694 ms 00:20:30.118 [2024-12-14 12:46:29.850203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.858752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.858774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:30.380 [2024-12-14 12:46:29.858782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.508 ms 00:20:30.380 [2024-12-14 12:46:29.858788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.859251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.859296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:30.380 [2024-12-14 12:46:29.859308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:20:30.380 [2024-12-14 12:46:29.859313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 
12:46:29.902364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.902505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:30.380 [2024-12-14 12:46:29.902524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.030 ms 00:20:30.380 [2024-12-14 12:46:29.902530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.910257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:30.380 [2024-12-14 12:46:29.921358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.921471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:30.380 [2024-12-14 12:46:29.921485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.774 ms 00:20:30.380 [2024-12-14 12:46:29.921493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.921562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.921572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:30.380 [2024-12-14 12:46:29.921579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:30.380 [2024-12-14 12:46:29.921586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.921638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.921647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:30.380 [2024-12-14 12:46:29.921654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:30.380 [2024-12-14 12:46:29.921662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.921680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.921688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:30.380 [2024-12-14 12:46:29.921694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:30.380 [2024-12-14 12:46:29.921702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.921727] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:30.380 [2024-12-14 12:46:29.921737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.921744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:30.380 [2024-12-14 12:46:29.921752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:30.380 [2024-12-14 12:46:29.921757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.939104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.939129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:30.380 [2024-12-14 12:46:29.939139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:20:30.380 [2024-12-14 12:46:29.939145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.380 [2024-12-14 12:46:29.939214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.380 [2024-12-14 12:46:29.939222] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:20:30.380 [2024-12-14 12:46:29.939229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:20:30.380 [2024-12-14 12:46:29.939237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.380 [2024-12-14 12:46:29.939838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:20:30.380 [2024-12-14 12:46:29.942036] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.508 ms, result 0
00:20:30.380 [2024-12-14 12:46:29.942979] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:20:30.380 Some configs were skipped because the RPC state that can call them passed over.
00:20:30.380 12:46:29 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:20:30.640 [2024-12-14 12:46:30.171329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.640 [2024-12-14 12:46:30.171442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:30.640 [2024-12-14 12:46:30.171487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms
00:20:30.641 [2024-12-14 12:46:30.171508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.641 [2024-12-14 12:46:30.171545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.855 ms, result 0
00:20:30.641 true
00:20:30.641 12:46:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:20:30.641 [2024-12-14 12:46:30.366846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:20:30.641 [2024-12-14 12:46:30.366930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:20:30.641 [2024-12-14 12:46:30.366971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms
00:20:30.641 [2024-12-14 12:46:30.366989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:30.641 [2024-12-14 12:46:30.367027] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.127 ms, result 0
00:20:30.641 true
00:20:30.901 12:46:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78438
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78438 ']'
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78438
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78438
00:20:30.901 killing process with pid 78438
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78438'
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78438
00:20:30.901 12:46:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78438
00:20:31.475 [2024-12-14 12:46:30.936346]
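The two bdev_ftl_unmap RPCs above (trim.sh@78 and @79) each complete as a short 'Process trim' management action in the target log and return true. The LBA 23591936 in the second call is the last 1024-block range of the device: startup reported 23592960 L2P entries, and 23592960 - 1024 = 23591936. Replayed by hand against a running target, the trim-and-teardown step would look like this sketch, reusing the $SPDK_DIR and $svcpid names assumed in the bring-up sketch earlier (the exact rpc.py invocations are taken from the log):

  # Trim 1024 blocks at the very start and at the very end of ftl0.
  "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  "$SPDK_DIR/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
  # Stop the target; this is what triggers the 'FTL shutdown' sequence logged below.
  kill "$svcpid"
  wait "$svcpid"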
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.936390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:31.475 [2024-12-14 12:46:30.936401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.475 [2024-12-14 12:46:30.936409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.936429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:31.475 [2024-12-14 12:46:30.938534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.938558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:31.475 [2024-12-14 12:46:30.938571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.092 ms 00:20:31.475 [2024-12-14 12:46:30.938577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.938797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.938805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:31.475 [2024-12-14 12:46:30.938813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:20:31.475 [2024-12-14 12:46:30.938818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.942066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.942089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:31.475 [2024-12-14 12:46:30.942104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:20:31.475 [2024-12-14 12:46:30.942109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.947302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.947418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:31.475 [2024-12-14 12:46:30.947436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.163 ms 00:20:31.475 [2024-12-14 12:46:30.947443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.955214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.955301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:31.475 [2024-12-14 12:46:30.955351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.727 ms 00:20:31.475 [2024-12-14 12:46:30.955369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.962005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.962105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.475 [2024-12-14 12:46:30.962156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.599 ms 00:20:31.475 [2024-12-14 12:46:30.962173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.962288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.962308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.475 [2024-12-14 12:46:30.962324] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:31.475 [2024-12-14 12:46:30.962367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.970001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.970097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.475 [2024-12-14 12:46:30.970147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.605 ms 00:20:31.475 [2024-12-14 12:46:30.970164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.977478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.977566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.475 [2024-12-14 12:46:30.977669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.277 ms 00:20:31.475 [2024-12-14 12:46:30.977692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.984626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.984703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.475 [2024-12-14 12:46:30.984743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.885 ms 00:20:31.475 [2024-12-14 12:46:30.984759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.991618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.475 [2024-12-14 12:46:30.991696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.475 [2024-12-14 12:46:30.991736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:20:31.475 [2024-12-14 12:46:30.991753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.475 [2024-12-14 12:46:30.991793] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.475 [2024-12-14 12:46:30.991815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.475 [2024-12-14 12:46:30.991869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.475 [2024-12-14 12:46:30.991895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.475 [2024-12-14 12:46:30.991917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.991959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.991988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992135] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 
[2024-12-14 12:46:30.992837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.992999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:31.476 [2024-12-14 12:46:30.993548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.993979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.476 [2024-12-14 12:46:30.994749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.477 [2024-12-14 12:46:30.994770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.477 [2024-12-14 12:46:30.994860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.477 [2024-12-14 12:46:30.994881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.477 [2024-12-14 12:46:30.994903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.477 [2024-12-14 12:46:30.994936] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.477 [2024-12-14 12:46:30.994956] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:20:31.477 [2024-12-14 12:46:30.994981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.477 [2024-12-14 12:46:30.995020] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.477 [2024-12-14 12:46:30.995066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.477 [2024-12-14 12:46:30.995086] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.477 [2024-12-14 12:46:30.995117] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.477 [2024-12-14 12:46:30.995135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.477 [2024-12-14 12:46:30.995150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.477 [2024-12-14 12:46:30.995165] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.477 [2024-12-14 12:46:30.995179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.477 [2024-12-14 12:46:30.995194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:31.477 [2024-12-14 12:46:30.995209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.477 [2024-12-14 12:46:30.995226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.402 ms 00:20:31.477 [2024-12-14 12:46:30.995294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.004835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.477 [2024-12-14 12:46:31.004914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.477 [2024-12-14 12:46:31.004957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.489 ms 00:20:31.477 [2024-12-14 12:46:31.004979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.005302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.477 [2024-12-14 12:46:31.005366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.477 [2024-12-14 12:46:31.005434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:20:31.477 [2024-12-14 12:46:31.005453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.040199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.040288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.477 [2024-12-14 12:46:31.040328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.040346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.040426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.040445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.477 [2024-12-14 12:46:31.040464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.040478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.040521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.040539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.477 [2024-12-14 12:46:31.040557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.040608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.040637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.040674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.477 [2024-12-14 12:46:31.040699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.040707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.100226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.100258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.477 [2024-12-14 12:46:31.100269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.100275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 
12:46:31.148626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.148657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.477 [2024-12-14 12:46:31.148666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.148675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.148731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.148738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.477 [2024-12-14 12:46:31.148748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.148754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.148779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.148786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.477 [2024-12-14 12:46:31.148793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.148799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.148871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.148879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.477 [2024-12-14 12:46:31.148886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.148892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.148918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.148924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.477 [2024-12-14 12:46:31.148931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.148937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.148969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.148976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.477 [2024-12-14 12:46:31.148985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.148991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.149024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.477 [2024-12-14 12:46:31.149031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.477 [2024-12-14 12:46:31.149039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.477 [2024-12-14 12:46:31.149044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.477 [2024-12-14 12:46:31.149169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 212.803 ms, result 0 00:20:32.050 12:46:31 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:32.050 12:46:31 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:32.050 [2024-12-14 12:46:31.744030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:32.050 [2024-12-14 12:46:31.744169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78491 ] 00:20:32.311 [2024-12-14 12:46:31.899811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:32.311 [2024-12-14 12:46:31.979280] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.571 [2024-12-14 12:46:32.187483] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.571 [2024-12-14 12:46:32.187534] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.833 [2024-12-14 12:46:32.335341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.833 [2024-12-14 12:46:32.335376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.833 [2024-12-14 12:46:32.335386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:32.833 [2024-12-14 12:46:32.335392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.833 [2024-12-14 12:46:32.337433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.833 [2024-12-14 12:46:32.337462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.833 [2024-12-14 12:46:32.337470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.029 ms 00:20:32.833 [2024-12-14 12:46:32.337475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.833 [2024-12-14 12:46:32.337531] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.833 [2024-12-14 12:46:32.338049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.833 [2024-12-14 12:46:32.338074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.833 [2024-12-14 12:46:32.338080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.833 [2024-12-14 12:46:32.338087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:20:32.833 [2024-12-14 12:46:32.338092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.833 [2024-12-14 12:46:32.339155] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:32.833 [2024-12-14 12:46:32.348645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.833 [2024-12-14 12:46:32.348671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:32.834 [2024-12-14 12:46:32.348679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.491 ms 00:20:32.834 [2024-12-14 12:46:32.348685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.348747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.348755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:32.834 [2024-12-14 12:46:32.348762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.016 ms 00:20:32.834 [2024-12-14 12:46:32.348767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.353073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.353095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.834 [2024-12-14 12:46:32.353102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.277 ms 00:20:32.834 [2024-12-14 12:46:32.353108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.353179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.353186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.834 [2024-12-14 12:46:32.353192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:32.834 [2024-12-14 12:46:32.353198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.353217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.353223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.834 [2024-12-14 12:46:32.353229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:32.834 [2024-12-14 12:46:32.353234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.353251] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:32.834 [2024-12-14 12:46:32.355883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.355904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.834 [2024-12-14 12:46:32.355911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:20:32.834 [2024-12-14 12:46:32.355917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.355945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.355952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.834 [2024-12-14 12:46:32.355958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:32.834 [2024-12-14 12:46:32.355963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.355977] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:32.834 [2024-12-14 12:46:32.355991] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:32.834 [2024-12-14 12:46:32.356017] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:32.834 [2024-12-14 12:46:32.356028] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:32.834 [2024-12-14 12:46:32.356113] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:32.834 [2024-12-14 12:46:32.356121] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.834 [2024-12-14 12:46:32.356129] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:32.834 [2024-12-14 12:46:32.356138] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356145] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356151] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:32.834 [2024-12-14 12:46:32.356157] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.834 [2024-12-14 12:46:32.356162] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:32.834 [2024-12-14 12:46:32.356167] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:32.834 [2024-12-14 12:46:32.356173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.356178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.834 [2024-12-14 12:46:32.356184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:20:32.834 [2024-12-14 12:46:32.356189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.356255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.834 [2024-12-14 12:46:32.356263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.834 [2024-12-14 12:46:32.356269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:32.834 [2024-12-14 12:46:32.356274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.834 [2024-12-14 12:46:32.356347] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.834 [2024-12-14 12:46:32.356354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.834 [2024-12-14 12:46:32.356360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:32.834 [2024-12-14 12:46:32.356377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.834 [2024-12-14 12:46:32.356393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.834 [2024-12-14 12:46:32.356403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.834 [2024-12-14 12:46:32.356413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:32.834 [2024-12-14 12:46:32.356418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.834 [2024-12-14 12:46:32.356424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.834 [2024-12-14 12:46:32.356430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:32.834 [2024-12-14 12:46:32.356435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356440] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.834 [2024-12-14 12:46:32.356444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.834 [2024-12-14 12:46:32.356459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.834 [2024-12-14 12:46:32.356474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.834 [2024-12-14 12:46:32.356489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.834 [2024-12-14 12:46:32.356504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.834 [2024-12-14 12:46:32.356519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.834 [2024-12-14 12:46:32.356529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.834 [2024-12-14 12:46:32.356534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:32.834 [2024-12-14 12:46:32.356539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.834 [2024-12-14 12:46:32.356544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:32.834 [2024-12-14 12:46:32.356549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:32.834 [2024-12-14 12:46:32.356553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:32.834 [2024-12-14 12:46:32.356563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:32.834 [2024-12-14 12:46:32.356568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.834 [2024-12-14 12:46:32.356579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.834 [2024-12-14 12:46:32.356586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.834 [2024-12-14 12:46:32.356598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.834 
[2024-12-14 12:46:32.356603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.834 [2024-12-14 12:46:32.356608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.834 [2024-12-14 12:46:32.356613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.834 [2024-12-14 12:46:32.356618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.834 [2024-12-14 12:46:32.356623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.834 [2024-12-14 12:46:32.356629] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.834 [2024-12-14 12:46:32.356635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.834 [2024-12-14 12:46:32.356641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:32.834 [2024-12-14 12:46:32.356647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:32.834 [2024-12-14 12:46:32.356652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:32.835 [2024-12-14 12:46:32.356658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:32.835 [2024-12-14 12:46:32.356663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:32.835 [2024-12-14 12:46:32.356669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:32.835 [2024-12-14 12:46:32.356674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:32.835 [2024-12-14 12:46:32.356679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:32.835 [2024-12-14 12:46:32.356684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:32.835 [2024-12-14 12:46:32.356689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:32.835 [2024-12-14 12:46:32.356696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:32.835 [2024-12-14 12:46:32.356701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:32.835 [2024-12-14 12:46:32.356707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:32.835 [2024-12-14 12:46:32.356712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:32.835 [2024-12-14 12:46:32.356717] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.835 [2024-12-14 12:46:32.356723] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.835 [2024-12-14 12:46:32.356730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.835 [2024-12-14 12:46:32.356735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.835 [2024-12-14 12:46:32.356741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.835 [2024-12-14 12:46:32.356746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:32.835 [2024-12-14 12:46:32.356751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.356759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.835 [2024-12-14 12:46:32.356765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:20:32.835 [2024-12-14 12:46:32.356770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.377423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.377546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.835 [2024-12-14 12:46:32.377559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.615 ms 00:20:32.835 [2024-12-14 12:46:32.377565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.377670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.377679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.835 [2024-12-14 12:46:32.377686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:32.835 [2024-12-14 12:46:32.377691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.416775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.416804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.835 [2024-12-14 12:46:32.416815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.068 ms 00:20:32.835 [2024-12-14 12:46:32.416821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.416878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.416887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.835 [2024-12-14 12:46:32.416894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:32.835 [2024-12-14 12:46:32.416899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.417192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.417204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.835 [2024-12-14 12:46:32.417211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:20:32.835 [2024-12-14 12:46:32.417222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 
12:46:32.417323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.417330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.835 [2024-12-14 12:46:32.417336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:32.835 [2024-12-14 12:46:32.417342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.427938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.428050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.835 [2024-12-14 12:46:32.428079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.581 ms 00:20:32.835 [2024-12-14 12:46:32.428085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.437709] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:32.835 [2024-12-14 12:46:32.437734] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:32.835 [2024-12-14 12:46:32.437744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.437750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:32.835 [2024-12-14 12:46:32.437756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.568 ms 00:20:32.835 [2024-12-14 12:46:32.437762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.456105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.456132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:32.835 [2024-12-14 12:46:32.456141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.298 ms 00:20:32.835 [2024-12-14 12:46:32.456147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.465077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.465100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:32.835 [2024-12-14 12:46:32.465107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.878 ms 00:20:32.835 [2024-12-14 12:46:32.465112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.473608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.473642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:32.835 [2024-12-14 12:46:32.473650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.457 ms 00:20:32.835 [2024-12-14 12:46:32.473656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.474120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.474167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.835 [2024-12-14 12:46:32.474174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:20:32.835 [2024-12-14 12:46:32.474180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.517236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.517269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.835 [2024-12-14 12:46:32.517278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.039 ms 00:20:32.835 [2024-12-14 12:46:32.517284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.525045] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:32.835 [2024-12-14 12:46:32.536208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.536233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.835 [2024-12-14 12:46:32.536241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.866 ms 00:20:32.835 [2024-12-14 12:46:32.536251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.536318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.536326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.835 [2024-12-14 12:46:32.536333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:32.835 [2024-12-14 12:46:32.536338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.536373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.536379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.835 [2024-12-14 12:46:32.536385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:32.835 [2024-12-14 12:46:32.536393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.536416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.536422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.835 [2024-12-14 12:46:32.536428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:32.835 [2024-12-14 12:46:32.536434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.536458] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.835 [2024-12-14 12:46:32.536465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.536471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.835 [2024-12-14 12:46:32.536477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:32.835 [2024-12-14 12:46:32.536482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.554137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.554162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.835 [2024-12-14 12:46:32.554170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.640 ms 00:20:32.835 [2024-12-14 12:46:32.554177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.554243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.835 [2024-12-14 12:46:32.554250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:32.835 [2024-12-14 12:46:32.554257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:32.835 [2024-12-14 12:46:32.554263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.835 [2024-12-14 12:46:32.554871] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.836 [2024-12-14 12:46:32.557151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 219.319 ms, result 0 00:20:32.836 [2024-12-14 12:46:32.557685] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.096 [2024-12-14 12:46:32.572368] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:34.040  [2024-12-14T12:46:34.722Z] Copying: 29/256 [MB] (29 MBps) [2024-12-14T12:46:35.663Z] Copying: 41/256 [MB] (11 MBps) [2024-12-14T12:46:36.606Z] Copying: 51/256 [MB] (10 MBps) [2024-12-14T12:46:37.993Z] Copying: 66/256 [MB] (15 MBps) [2024-12-14T12:46:38.936Z] Copying: 77/256 [MB] (10 MBps) [2024-12-14T12:46:39.880Z] Copying: 87/256 [MB] (10 MBps) [2024-12-14T12:46:40.825Z] Copying: 98/256 [MB] (10 MBps) [2024-12-14T12:46:41.833Z] Copying: 114/256 [MB] (15 MBps) [2024-12-14T12:46:42.822Z] Copying: 130/256 [MB] (15 MBps) [2024-12-14T12:46:43.765Z] Copying: 144/256 [MB] (14 MBps) [2024-12-14T12:46:44.709Z] Copying: 169/256 [MB] (24 MBps) [2024-12-14T12:46:45.656Z] Copying: 190/256 [MB] (20 MBps) [2024-12-14T12:46:46.597Z] Copying: 212/256 [MB] (21 MBps) [2024-12-14T12:46:47.985Z] Copying: 227/256 [MB] (15 MBps) [2024-12-14T12:46:48.559Z] Copying: 242/256 [MB] (14 MBps) [2024-12-14T12:46:48.559Z] Copying: 256/256 [MB] (average 16 MBps)[2024-12-14 12:46:48.316620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:48.822 [2024-12-14 12:46:48.326626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.326675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:48.822 [2024-12-14 12:46:48.326695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:48.822 [2024-12-14 12:46:48.326704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.326728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:48.822 [2024-12-14 12:46:48.329634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.329675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:48.822 [2024-12-14 12:46:48.329686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:20:48.822 [2024-12-14 12:46:48.329695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.329961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.329971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:48.822 [2024-12-14 12:46:48.329979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:20:48.822 [2024-12-14 12:46:48.329987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.333730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.333897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:48.822 [2024-12-14 12:46:48.333912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.724 ms 00:20:48.822 [2024-12-14 12:46:48.333921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.340768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.340944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:48.822 [2024-12-14 12:46:48.340963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.819 ms 00:20:48.822 [2024-12-14 12:46:48.340971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.366695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.366743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:48.822 [2024-12-14 12:46:48.366757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.642 ms 00:20:48.822 [2024-12-14 12:46:48.366765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.382528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.382716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:48.822 [2024-12-14 12:46:48.382747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.711 ms 00:20:48.822 [2024-12-14 12:46:48.382755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.382945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.382958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:48.822 [2024-12-14 12:46:48.382976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:48.822 [2024-12-14 12:46:48.382985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.409271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.409453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:48.822 [2024-12-14 12:46:48.409473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.269 ms 00:20:48.822 [2024-12-14 12:46:48.409480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.435252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.435300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:48.822 [2024-12-14 12:46:48.435312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.671 ms 00:20:48.822 [2024-12-14 12:46:48.435319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.460049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.460235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:48.822 [2024-12-14 12:46:48.460254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.665 ms 00:20:48.822 [2024-12-14 12:46:48.460263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 
[2024-12-14 12:46:48.484910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.822 [2024-12-14 12:46:48.484956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:48.822 [2024-12-14 12:46:48.484967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.517 ms 00:20:48.822 [2024-12-14 12:46:48.484974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.822 [2024-12-14 12:46:48.485023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:48.822 [2024-12-14 12:46:48.485040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:20:48.822 [2024-12-14 12:46:48.485217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:48.822 [2024-12-14 12:46:48.485401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485802] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:48.823 [2024-12-14 12:46:48.485849] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:48.823 [2024-12-14 12:46:48.485857] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:20:48.823 [2024-12-14 12:46:48.485866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:48.823 [2024-12-14 12:46:48.485874] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:48.823 [2024-12-14 12:46:48.485881] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:48.823 [2024-12-14 12:46:48.485889] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:48.823 [2024-12-14 12:46:48.485897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:48.823 [2024-12-14 12:46:48.485905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:48.823 [2024-12-14 12:46:48.485916] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:48.823 [2024-12-14 12:46:48.485923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:48.823 [2024-12-14 12:46:48.485929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:48.823 [2024-12-14 12:46:48.485936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.823 [2024-12-14 12:46:48.485944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:48.823 [2024-12-14 12:46:48.485953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.914 ms 00:20:48.823 [2024-12-14 12:46:48.485960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.823 [2024-12-14 12:46:48.499774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.823 [2024-12-14 12:46:48.499952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:48.823 [2024-12-14 12:46:48.499969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.781 ms 00:20:48.823 [2024-12-14 12:46:48.499978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.823 [2024-12-14 12:46:48.500408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:48.823 [2024-12-14 12:46:48.500421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:48.823 [2024-12-14 12:46:48.500431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:20:48.823 [2024-12-14 12:46:48.500439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.823 [2024-12-14 12:46:48.539655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.823 [2024-12-14 12:46:48.539705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:48.823 [2024-12-14 12:46:48.539716] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.823 [2024-12-14 12:46:48.539732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.823 [2024-12-14 12:46:48.539817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.823 [2024-12-14 12:46:48.539827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:48.823 [2024-12-14 12:46:48.539834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.823 [2024-12-14 12:46:48.539842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.823 [2024-12-14 12:46:48.539895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.823 [2024-12-14 12:46:48.539904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:48.823 [2024-12-14 12:46:48.539913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.823 [2024-12-14 12:46:48.539920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:48.823 [2024-12-14 12:46:48.539941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:48.823 [2024-12-14 12:46:48.539949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:48.823 [2024-12-14 12:46:48.539957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:48.823 [2024-12-14 12:46:48.539964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.625827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.625882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.085 [2024-12-14 12:46:48.625895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.625903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.085 [2024-12-14 12:46:48.696117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.085 [2024-12-14 12:46:48.696207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.085 [2024-12-14 12:46:48.696277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:20:49.085 [2024-12-14 12:46:48.696404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:49.085 [2024-12-14 12:46:48.696469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.085 [2024-12-14 12:46:48.696541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.085 [2024-12-14 12:46:48.696613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.085 [2024-12-14 12:46:48.696622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.085 [2024-12-14 12:46:48.696630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.085 [2024-12-14 12:46:48.696785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.155 ms, result 0 00:20:50.028 00:20:50.028 00:20:50.028 12:46:49 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:50.028 12:46:49 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:50.601 12:46:50 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:50.601 [2024-12-14 12:46:50.153453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:50.601 [2024-12-14 12:46:50.153636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78683 ] 00:20:50.601 [2024-12-14 12:46:50.318452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.862 [2024-12-14 12:46:50.438315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.122 [2024-12-14 12:46:50.733220] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.122 [2024-12-14 12:46:50.733312] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.384 [2024-12-14 12:46:50.894227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.384 [2024-12-14 12:46:50.894289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:51.385 [2024-12-14 12:46:50.894305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:51.385 [2024-12-14 12:46:50.894314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.897336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.897383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.385 [2024-12-14 12:46:50.897394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:20:51.385 [2024-12-14 12:46:50.897403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.897657] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:51.385 [2024-12-14 12:46:50.898423] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:51.385 [2024-12-14 12:46:50.898452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.898462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.385 [2024-12-14 12:46:50.898473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:20:51.385 [2024-12-14 12:46:50.898481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.900511] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:51.385 [2024-12-14 12:46:50.915194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.915391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:51.385 [2024-12-14 12:46:50.915591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.685 ms 00:20:51.385 [2024-12-14 12:46:50.915635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.915782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.916099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:51.385 [2024-12-14 12:46:50.916123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:51.385 [2024-12-14 12:46:50.916217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.925076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:51.385 [2024-12-14 12:46:50.925567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:51.385 [2024-12-14 12:46:50.925765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.771 ms 00:20:51.385 [2024-12-14 12:46:50.925796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.925974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.926151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:51.385 [2024-12-14 12:46:50.926169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:51.385 [2024-12-14 12:46:50.926178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.926230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.926240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:51.385 [2024-12-14 12:46:50.926249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:51.385 [2024-12-14 12:46:50.926257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.926284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:51.385 [2024-12-14 12:46:50.930833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.931000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:51.385 [2024-12-14 12:46:50.931090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.556 ms 00:20:51.385 [2024-12-14 12:46:50.931115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.931205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.931230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:51.385 [2024-12-14 12:46:50.931252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:51.385 [2024-12-14 12:46:50.931271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.931375] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:51.385 [2024-12-14 12:46:50.931419] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:51.385 [2024-12-14 12:46:50.931479] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:51.385 [2024-12-14 12:46:50.931517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:51.385 [2024-12-14 12:46:50.931646] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:51.385 [2024-12-14 12:46:50.931741] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:51.385 [2024-12-14 12:46:50.931775] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:51.385 [2024-12-14 12:46:50.931812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:51.385 [2024-12-14 12:46:50.931842] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:51.385 [2024-12-14 12:46:50.931872] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:51.385 [2024-12-14 12:46:50.931890] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:51.385 [2024-12-14 12:46:50.931909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:51.385 [2024-12-14 12:46:50.931928] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:51.385 [2024-12-14 12:46:50.931949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.931967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:51.385 [2024-12-14 12:46:50.931987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:20:51.385 [2024-12-14 12:46:50.932006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.932129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.385 [2024-12-14 12:46:50.932296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:51.385 [2024-12-14 12:46:50.932437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:51.385 [2024-12-14 12:46:50.932446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.385 [2024-12-14 12:46:50.932550] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:51.385 [2024-12-14 12:46:50.932561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:51.385 [2024-12-14 12:46:50.932569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:51.385 [2024-12-14 12:46:50.932592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:51.385 [2024-12-14 12:46:50.932613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.385 [2024-12-14 12:46:50.932627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:51.385 [2024-12-14 12:46:50.932643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:51.385 [2024-12-14 12:46:50.932649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:51.385 [2024-12-14 12:46:50.932656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:51.385 [2024-12-14 12:46:50.932663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:51.385 [2024-12-14 12:46:50.932669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:51.385 [2024-12-14 12:46:50.932683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932689] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:51.385 [2024-12-14 12:46:50.932703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:51.385 [2024-12-14 12:46:50.932721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:51.385 [2024-12-14 12:46:50.932741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:51.385 [2024-12-14 12:46:50.932760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:51.385 [2024-12-14 12:46:50.932772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:51.385 [2024-12-14 12:46:50.932780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:51.385 [2024-12-14 12:46:50.932790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.385 [2024-12-14 12:46:50.932797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:51.385 [2024-12-14 12:46:50.932805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:51.385 [2024-12-14 12:46:50.932812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:51.385 [2024-12-14 12:46:50.932819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:51.385 [2024-12-14 12:46:50.932825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:51.386 [2024-12-14 12:46:50.932832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.386 [2024-12-14 12:46:50.932838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:51.386 [2024-12-14 12:46:50.932844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:51.386 [2024-12-14 12:46:50.932851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.386 [2024-12-14 12:46:50.932858] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:51.386 [2024-12-14 12:46:50.932865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:51.386 [2024-12-14 12:46:50.932875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:51.386 [2024-12-14 12:46:50.932883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:51.386 [2024-12-14 12:46:50.932891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:51.386 [2024-12-14 12:46:50.932897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:51.386 [2024-12-14 12:46:50.932904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:51.386 
[2024-12-14 12:46:50.932910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:51.386 [2024-12-14 12:46:50.932917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:51.386 [2024-12-14 12:46:50.932923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:51.386 [2024-12-14 12:46:50.932932] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:51.386 [2024-12-14 12:46:50.932942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.932952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:51.386 [2024-12-14 12:46:50.932959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:51.386 [2024-12-14 12:46:50.932966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:51.386 [2024-12-14 12:46:50.932973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:51.386 [2024-12-14 12:46:50.932980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:51.386 [2024-12-14 12:46:50.932987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:51.386 [2024-12-14 12:46:50.932993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:51.386 [2024-12-14 12:46:50.933000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:51.386 [2024-12-14 12:46:50.933008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:51.386 [2024-12-14 12:46:50.933016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.933024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.933031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.933038] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.933046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:51.386 [2024-12-14 12:46:50.933074] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:51.386 [2024-12-14 12:46:50.933084] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.933093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:51.386 [2024-12-14 12:46:50.933101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:51.386 [2024-12-14 12:46:50.933109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:51.386 [2024-12-14 12:46:50.933116] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:51.386 [2024-12-14 12:46:50.933124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:50.933135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:51.386 [2024-12-14 12:46:50.933143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:20:51.386 [2024-12-14 12:46:50.933150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:50.965476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:50.965532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:51.386 [2024-12-14 12:46:50.965545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.264 ms 00:20:51.386 [2024-12-14 12:46:50.965554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:50.965727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:50.965741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:51.386 [2024-12-14 12:46:50.965751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:51.386 [2024-12-14 12:46:50.965760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.011903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.011961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:51.386 [2024-12-14 12:46:51.011978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.118 ms 00:20:51.386 [2024-12-14 12:46:51.011988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.012131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.012146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.386 [2024-12-14 12:46:51.012156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.386 [2024-12-14 12:46:51.012165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.012735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.012784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.386 [2024-12-14 12:46:51.012796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:20:51.386 [2024-12-14 12:46:51.012811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.012969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.012979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.386 [2024-12-14 12:46:51.012987] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:51.386 [2024-12-14 12:46:51.012995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.029327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.029373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.386 [2024-12-14 12:46:51.029384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.308 ms 00:20:51.386 [2024-12-14 12:46:51.029393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.043807] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:51.386 [2024-12-14 12:46:51.043860] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:51.386 [2024-12-14 12:46:51.043874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.043883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:51.386 [2024-12-14 12:46:51.043893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.358 ms 00:20:51.386 [2024-12-14 12:46:51.043900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.069710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.069766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:51.386 [2024-12-14 12:46:51.069779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.709 ms 00:20:51.386 [2024-12-14 12:46:51.069787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.082812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.082859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:51.386 [2024-12-14 12:46:51.082872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.925 ms 00:20:51.386 [2024-12-14 12:46:51.082880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.095349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.095399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:51.386 [2024-12-14 12:46:51.095411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.376 ms 00:20:51.386 [2024-12-14 12:46:51.095419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.386 [2024-12-14 12:46:51.096105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.386 [2024-12-14 12:46:51.096142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:51.386 [2024-12-14 12:46:51.096152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:20:51.386 [2024-12-14 12:46:51.096160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.648 [2024-12-14 12:46:51.162624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.648 [2024-12-14 12:46:51.162693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:51.648 [2024-12-14 12:46:51.162711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.432 ms 00:20:51.648 [2024-12-14 12:46:51.162721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.648 [2024-12-14 12:46:51.173971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:51.648 [2024-12-14 12:46:51.193161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.648 [2024-12-14 12:46:51.193217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:51.648 [2024-12-14 12:46:51.193232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.333 ms 00:20:51.648 [2024-12-14 12:46:51.193247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.648 [2024-12-14 12:46:51.193348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.648 [2024-12-14 12:46:51.193361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:51.648 [2024-12-14 12:46:51.193371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:51.648 [2024-12-14 12:46:51.193380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.648 [2024-12-14 12:46:51.193439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.648 [2024-12-14 12:46:51.193449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:51.648 [2024-12-14 12:46:51.193458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:51.648 [2024-12-14 12:46:51.193470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.648 [2024-12-14 12:46:51.193502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.648 [2024-12-14 12:46:51.193511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:51.649 [2024-12-14 12:46:51.193520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:51.649 [2024-12-14 12:46:51.193528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.649 [2024-12-14 12:46:51.193566] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:51.649 [2024-12-14 12:46:51.193578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.649 [2024-12-14 12:46:51.193586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:51.649 [2024-12-14 12:46:51.193594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:51.649 [2024-12-14 12:46:51.193630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.649 [2024-12-14 12:46:51.219697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.649 [2024-12-14 12:46:51.219753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:51.649 [2024-12-14 12:46:51.219767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.044 ms 00:20:51.649 [2024-12-14 12:46:51.219777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.649 [2024-12-14 12:46:51.219921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.649 [2024-12-14 12:46:51.219934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:51.649 [2024-12-14 12:46:51.219944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:51.649 [2024-12-14 12:46:51.219952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:51.649 [2024-12-14 12:46:51.221293] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.649 [2024-12-14 12:46:51.224794] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 326.715 ms, result 0 00:20:51.649 [2024-12-14 12:46:51.226239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.649 [2024-12-14 12:46:51.239882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:51.910  [2024-12-14T12:46:51.647Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-12-14 12:46:51.627718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.910 [2024-12-14 12:46:51.636785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.910 [2024-12-14 12:46:51.636836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:51.910 [2024-12-14 12:46:51.636854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.910 [2024-12-14 12:46:51.636863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.910 [2024-12-14 12:46:51.636888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:51.910 [2024-12-14 12:46:51.639856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.910 [2024-12-14 12:46:51.639900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:51.910 [2024-12-14 12:46:51.639912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.954 ms 00:20:51.910 [2024-12-14 12:46:51.639920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.910 [2024-12-14 12:46:51.642886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.910 [2024-12-14 12:46:51.643084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:51.910 [2024-12-14 12:46:51.643105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.934 ms 00:20:51.910 [2024-12-14 12:46:51.643113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.647596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.647636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.173 [2024-12-14 12:46:51.647648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.453 ms 00:20:52.173 [2024-12-14 12:46:51.647657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.654542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.654718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.173 [2024-12-14 12:46:51.654737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.850 ms 00:20:52.173 [2024-12-14 12:46:51.654745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.680142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.680194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.173 [2024-12-14 12:46:51.680207] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.347 ms 00:20:52.173 [2024-12-14 12:46:51.680214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.696392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.696445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.173 [2024-12-14 12:46:51.696459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.127 ms 00:20:52.173 [2024-12-14 12:46:51.696467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.696624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.696636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.173 [2024-12-14 12:46:51.696654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:52.173 [2024-12-14 12:46:51.696661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.722644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.722692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.173 [2024-12-14 12:46:51.722704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.965 ms 00:20:52.173 [2024-12-14 12:46:51.722711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.748369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.748417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.173 [2024-12-14 12:46:51.748429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.577 ms 00:20:52.173 [2024-12-14 12:46:51.748436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.772931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.772990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.173 [2024-12-14 12:46:51.773002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.445 ms 00:20:52.173 [2024-12-14 12:46:51.773009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.798069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.173 [2024-12-14 12:46:51.798117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.173 [2024-12-14 12:46:51.798128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.961 ms 00:20:52.173 [2024-12-14 12:46:51.798135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.173 [2024-12-14 12:46:51.798184] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.173 [2024-12-14 12:46:51.798200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:52.173 [2024-12-14 12:46:51.798234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.173 [2024-12-14 12:46:51.798636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798786] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.174 [2024-12-14 12:46:51.798969] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.174 [2024-12-14 12:46:51.798977] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:20:52.174 [2024-12-14 12:46:51.798986] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.174 [2024-12-14 12:46:51.798994] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:52.174 [2024-12-14 12:46:51.799001] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.174 [2024-12-14 12:46:51.799009] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.174 [2024-12-14 12:46:51.799016] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.174 [2024-12-14 12:46:51.799024] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.174 [2024-12-14 12:46:51.799035] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.174 [2024-12-14 12:46:51.799042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.174 [2024-12-14 12:46:51.799048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.174 [2024-12-14 12:46:51.799086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.174 [2024-12-14 12:46:51.799094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.174 [2024-12-14 12:46:51.799104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:20:52.174 [2024-12-14 12:46:51.799111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.174 [2024-12-14 12:46:51.812358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.174 [2024-12-14 12:46:51.812549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.174 [2024-12-14 12:46:51.812569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:20:52.174 [2024-12-14 12:46:51.812577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.174 [2024-12-14 12:46:51.812979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.174 [2024-12-14 12:46:51.812990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.174 [2024-12-14 12:46:51.813000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.354 ms 00:20:52.174 [2024-12-14 12:46:51.813008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.174 [2024-12-14 12:46:51.851715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.174 [2024-12-14 12:46:51.851906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.174 [2024-12-14 12:46:51.851924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.174 [2024-12-14 12:46:51.851941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.174 [2024-12-14 12:46:51.852042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.174 [2024-12-14 12:46:51.852053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.174 [2024-12-14 12:46:51.852093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.174 [2024-12-14 12:46:51.852102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.174 [2024-12-14 12:46:51.852154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.174 [2024-12-14 12:46:51.852164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.174 [2024-12-14 12:46:51.852172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.174 [2024-12-14 12:46:51.852179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.174 [2024-12-14 12:46:51.852201] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.174 [2024-12-14 12:46:51.852210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.174 [2024-12-14 12:46:51.852220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.174 [2024-12-14 12:46:51.852229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:51.935994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:51.936082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.435 [2024-12-14 12:46:51.936097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:51.936105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.004866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.004925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.435 [2024-12-14 12:46:52.004937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.004946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.005036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.435 [2024-12-14 12:46:52.005045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.005074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.005126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.435 [2024-12-14 12:46:52.005135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.005143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.005259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.435 [2024-12-14 12:46:52.005267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.005278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.005329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.435 [2024-12-14 12:46:52.005341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.005349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.005405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.435 [2024-12-14 12:46:52.005413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.005422] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.435 [2024-12-14 12:46:52.005486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.435 [2024-12-14 12:46:52.005495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.435 [2024-12-14 12:46:52.005503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.435 [2024-12-14 12:46:52.005677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.885 ms, result 0 00:20:53.376 00:20:53.376 00:20:53.376 12:46:52 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78714 00:20:53.376 12:46:52 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78714 00:20:53.376 12:46:52 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:53.376 12:46:52 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78714 ']' 00:20:53.376 12:46:52 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.376 12:46:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.376 12:46:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.376 12:46:52 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.376 12:46:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:53.376 [2024-12-14 12:46:52.890194] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:20:53.376 [2024-12-14 12:46:52.890944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78714 ] 00:20:53.376 [2024-12-14 12:46:53.053773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.637 [2024-12-14 12:46:53.178004] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.209 12:46:53 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.209 12:46:53 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:54.209 12:46:53 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:54.470 [2024-12-14 12:46:54.080589] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.470 [2024-12-14 12:46:54.080670] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.731 [2024-12-14 12:46:54.261775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.731 [2024-12-14 12:46:54.261835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.731 [2024-12-14 12:46:54.261852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:54.731 [2024-12-14 12:46:54.261861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.731 [2024-12-14 12:46:54.264865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.731 [2024-12-14 12:46:54.265092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.731 [2024-12-14 12:46:54.265119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:20:54.731 [2024-12-14 12:46:54.265128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.731 [2024-12-14 12:46:54.265583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.731 [2024-12-14 12:46:54.266397] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.731 [2024-12-14 12:46:54.266448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.731 [2024-12-14 12:46:54.266459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.731 [2024-12-14 12:46:54.266472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.895 ms 00:20:54.731 [2024-12-14 12:46:54.266481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.731 [2024-12-14 12:46:54.268334] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.731 [2024-12-14 12:46:54.283070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.731 [2024-12-14 12:46:54.283126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.732 [2024-12-14 12:46:54.283141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.741 ms 00:20:54.732 [2024-12-14 12:46:54.283152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.283273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.283287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.732 [2024-12-14 12:46:54.283297] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:54.732 [2024-12-14 12:46:54.283307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.291931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.291983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.732 [2024-12-14 12:46:54.291995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.563 ms 00:20:54.732 [2024-12-14 12:46:54.292005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.292151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.292166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.732 [2024-12-14 12:46:54.292176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:54.732 [2024-12-14 12:46:54.292192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.292221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.292232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.732 [2024-12-14 12:46:54.292241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.732 [2024-12-14 12:46:54.292251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.292276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:54.732 [2024-12-14 12:46:54.296293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.296332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.732 [2024-12-14 12:46:54.296346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.021 ms 00:20:54.732 [2024-12-14 12:46:54.296355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.296438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.296448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.732 [2024-12-14 12:46:54.296460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:54.732 [2024-12-14 12:46:54.296471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.296494] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.732 [2024-12-14 12:46:54.296517] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.732 [2024-12-14 12:46:54.296568] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.732 [2024-12-14 12:46:54.296584] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.732 [2024-12-14 12:46:54.296695] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.732 [2024-12-14 12:46:54.296708] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.732 [2024-12-14 12:46:54.296724] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.732 [2024-12-14 12:46:54.296736] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.732 [2024-12-14 12:46:54.296747] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.732 [2024-12-14 12:46:54.296759] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:54.732 [2024-12-14 12:46:54.296768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.732 [2024-12-14 12:46:54.296776] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.732 [2024-12-14 12:46:54.296791] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.732 [2024-12-14 12:46:54.296801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.296811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.732 [2024-12-14 12:46:54.296820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:20:54.732 [2024-12-14 12:46:54.296831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.296921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.732 [2024-12-14 12:46:54.296933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.732 [2024-12-14 12:46:54.296941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:54.732 [2024-12-14 12:46:54.296952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.732 [2024-12-14 12:46:54.297084] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.732 [2024-12-14 12:46:54.297099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.732 [2024-12-14 12:46:54.297109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.732 [2024-12-14 12:46:54.297141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.732 [2024-12-14 12:46:54.297172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.732 [2024-12-14 12:46:54.297194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.732 [2024-12-14 12:46:54.297203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:54.732 [2024-12-14 12:46:54.297222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.732 [2024-12-14 12:46:54.297233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.732 [2024-12-14 12:46:54.297241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:54.732 [2024-12-14 12:46:54.297249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.732 
[2024-12-14 12:46:54.297257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.732 [2024-12-14 12:46:54.297266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.732 [2024-12-14 12:46:54.297299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.732 [2024-12-14 12:46:54.297329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.732 [2024-12-14 12:46:54.297352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.732 [2024-12-14 12:46:54.297381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.732 [2024-12-14 12:46:54.297413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.732 [2024-12-14 12:46:54.297429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.732 [2024-12-14 12:46:54.297437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:54.732 [2024-12-14 12:46:54.297450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.732 [2024-12-14 12:46:54.297459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.732 [2024-12-14 12:46:54.297466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:54.732 [2024-12-14 12:46:54.297476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.732 [2024-12-14 12:46:54.297493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:54.732 [2024-12-14 12:46:54.297500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.732 [2024-12-14 12:46:54.297518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.732 [2024-12-14 12:46:54.297529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.732 [2024-12-14 12:46:54.297546] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:54.732 [2024-12-14 12:46:54.297553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.732 [2024-12-14 12:46:54.297564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.732 [2024-12-14 12:46:54.297570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.732 [2024-12-14 12:46:54.297578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.732 [2024-12-14 12:46:54.297585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.732 [2024-12-14 12:46:54.297596] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.732 [2024-12-14 12:46:54.297635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.732 [2024-12-14 12:46:54.297650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:54.732 [2024-12-14 12:46:54.297659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:54.732 [2024-12-14 12:46:54.297669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:54.732 [2024-12-14 12:46:54.297690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:54.733 [2024-12-14 12:46:54.297700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:54.733 [2024-12-14 12:46:54.297707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:54.733 [2024-12-14 12:46:54.297718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:54.733 [2024-12-14 12:46:54.297727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:54.733 [2024-12-14 12:46:54.297738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:54.733 [2024-12-14 12:46:54.297746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:54.733 [2024-12-14 12:46:54.297756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:54.733 [2024-12-14 12:46:54.297763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:54.733 [2024-12-14 12:46:54.297773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:54.733 [2024-12-14 12:46:54.297781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:54.733 [2024-12-14 12:46:54.297790] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.733 [2024-12-14 
12:46:54.297799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.733 [2024-12-14 12:46:54.297811] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.733 [2024-12-14 12:46:54.297818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.733 [2024-12-14 12:46:54.297827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.733 [2024-12-14 12:46:54.297834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.733 [2024-12-14 12:46:54.297844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.297853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.733 [2024-12-14 12:46:54.297864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:20:54.733 [2024-12-14 12:46:54.297874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.330132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.330358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.733 [2024-12-14 12:46:54.330382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.193 ms 00:20:54.733 [2024-12-14 12:46:54.330395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.330536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.330549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.733 [2024-12-14 12:46:54.330562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:54.733 [2024-12-14 12:46:54.330570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.365730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.365942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.733 [2024-12-14 12:46:54.365967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.132 ms 00:20:54.733 [2024-12-14 12:46:54.365977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.366103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.366116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.733 [2024-12-14 12:46:54.366130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.733 [2024-12-14 12:46:54.366138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.366689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.366711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.733 [2024-12-14 12:46:54.366727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:20:54.733 [2024-12-14 12:46:54.366735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.366883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.366894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.733 [2024-12-14 12:46:54.366907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:54.733 [2024-12-14 12:46:54.366916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.385028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.385249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.733 [2024-12-14 12:46:54.385276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.084 ms 00:20:54.733 [2024-12-14 12:46:54.385285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.413214] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:54.733 [2024-12-14 12:46:54.413441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.733 [2024-12-14 12:46:54.413471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.413482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.733 [2024-12-14 12:46:54.413496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.057 ms 00:20:54.733 [2024-12-14 12:46:54.413512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.439940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.439995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.733 [2024-12-14 12:46:54.440011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.212 ms 00:20:54.733 [2024-12-14 12:46:54.440020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.733 [2024-12-14 12:46:54.453121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.733 [2024-12-14 12:46:54.453166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.733 [2024-12-14 12:46:54.453185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.842 ms 00:20:54.733 [2024-12-14 12:46:54.453192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 12:46:54.465958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.466007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.994 [2024-12-14 12:46:54.466023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.672 ms 00:20:54.994 [2024-12-14 12:46:54.466031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 12:46:54.466746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.466784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.994 [2024-12-14 12:46:54.466798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:20:54.994 [2024-12-14 12:46:54.466807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 
12:46:54.533393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.533460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.994 [2024-12-14 12:46:54.533479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.554 ms 00:20:54.994 [2024-12-14 12:46:54.533489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 12:46:54.545248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:54.994 [2024-12-14 12:46:54.565481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.565544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:54.994 [2024-12-14 12:46:54.565562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.880 ms 00:20:54.994 [2024-12-14 12:46:54.565572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 12:46:54.565692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.565705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:54.994 [2024-12-14 12:46:54.565715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:54.994 [2024-12-14 12:46:54.565726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 12:46:54.565784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.565797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:54.994 [2024-12-14 12:46:54.565806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:54.994 [2024-12-14 12:46:54.565819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.994 [2024-12-14 12:46:54.565845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.994 [2024-12-14 12:46:54.565855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:54.994 [2024-12-14 12:46:54.565863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:54.994 [2024-12-14 12:46:54.565877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.995 [2024-12-14 12:46:54.565913] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:54.995 [2024-12-14 12:46:54.565927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.995 [2024-12-14 12:46:54.565938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:54.995 [2024-12-14 12:46:54.565948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:54.995 [2024-12-14 12:46:54.565957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.995 [2024-12-14 12:46:54.592587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.995 [2024-12-14 12:46:54.592820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:54.995 [2024-12-14 12:46:54.592850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.597 ms 00:20:54.995 [2024-12-14 12:46:54.592860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.995 [2024-12-14 12:46:54.593140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.995 [2024-12-14 12:46:54.593169] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:54.995 [2024-12-14 12:46:54.593182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:54.995 [2024-12-14 12:46:54.593195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.995 [2024-12-14 12:46:54.594382] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:54.995 [2024-12-14 12:46:54.597837] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 332.249 ms, result 0 00:20:54.995 [2024-12-14 12:46:54.600246] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.995 Some configs were skipped because the RPC state that can call them passed over. 00:20:54.995 12:46:54 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:55.255 [2024-12-14 12:46:54.844580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.255 [2024-12-14 12:46:54.844649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:55.255 [2024-12-14 12:46:54.844665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.811 ms 00:20:55.255 [2024-12-14 12:46:54.844676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.255 [2024-12-14 12:46:54.844714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.951 ms, result 0 00:20:55.255 true 00:20:55.255 12:46:54 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:55.514 [2024-12-14 12:46:55.063012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.514 [2024-12-14 12:46:55.063084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:55.514 [2024-12-14 12:46:55.063101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.442 ms 00:20:55.514 [2024-12-14 12:46:55.063110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.514 [2024-12-14 12:46:55.063151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.590 ms, result 0 00:20:55.514 true 00:20:55.514 12:46:55 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78714 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78714 ']' 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78714 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78714 00:20:55.514 killing process with pid 78714 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78714' 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78714 00:20:55.514 12:46:55 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78714 00:20:56.457 [2024-12-14 12:46:55.847458] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.847527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:56.457 [2024-12-14 12:46:55.847540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.457 [2024-12-14 12:46:55.847548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.847570] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:56.457 [2024-12-14 12:46:55.849948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.849986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:56.457 [2024-12-14 12:46:55.850001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.359 ms 00:20:56.457 [2024-12-14 12:46:55.850007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.850265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.850278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:56.457 [2024-12-14 12:46:55.850288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:20:56.457 [2024-12-14 12:46:55.850295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.853746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.853783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:56.457 [2024-12-14 12:46:55.853797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:20:56.457 [2024-12-14 12:46:55.853804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.859100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.859136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:56.457 [2024-12-14 12:46:55.859149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.254 ms 00:20:56.457 [2024-12-14 12:46:55.859156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.867708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.867752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:56.457 [2024-12-14 12:46:55.867764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.492 ms 00:20:56.457 [2024-12-14 12:46:55.867771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.874797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.874836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:56.457 [2024-12-14 12:46:55.874846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.983 ms 00:20:56.457 [2024-12-14 12:46:55.874852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.874973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.874981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:56.457 [2024-12-14 12:46:55.874991] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:56.457 [2024-12-14 12:46:55.874998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.883995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.884029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:56.457 [2024-12-14 12:46:55.884039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.978 ms 00:20:56.457 [2024-12-14 12:46:55.884045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.891687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.891717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:56.457 [2024-12-14 12:46:55.891729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.585 ms 00:20:56.457 [2024-12-14 12:46:55.891735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.898995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.899025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:56.457 [2024-12-14 12:46:55.899034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.222 ms 00:20:56.457 [2024-12-14 12:46:55.899040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.457 [2024-12-14 12:46:55.906726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.457 [2024-12-14 12:46:55.906754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:56.457 [2024-12-14 12:46:55.906763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.616 ms 00:20:56.458 [2024-12-14 12:46:55.906769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.458 [2024-12-14 12:46:55.906800] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:56.458 [2024-12-14 12:46:55.906812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906881] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.906998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 
[2024-12-14 12:46:55.907054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:56.458 [2024-12-14 12:46:55.907240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:56.458 [2024-12-14 12:46:55.907436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:56.459 [2024-12-14 12:46:55.907527] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:56.459 [2024-12-14 12:46:55.907538] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:20:56.459 [2024-12-14 12:46:55.907546] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:56.459 [2024-12-14 12:46:55.907553] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:56.459 [2024-12-14 12:46:55.907559] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:56.459 [2024-12-14 12:46:55.907566] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:56.459 [2024-12-14 12:46:55.907571] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:56.459 [2024-12-14 12:46:55.907579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:56.459 [2024-12-14 12:46:55.907584] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:56.459 [2024-12-14 12:46:55.907590] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:56.459 [2024-12-14 12:46:55.907596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:56.459 [2024-12-14 12:46:55.907604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:56.459 [2024-12-14 12:46:55.907610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:56.459 [2024-12-14 12:46:55.907618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:20:56.459 [2024-12-14 12:46:55.907624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:55.917560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.459 [2024-12-14 12:46:55.917708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:56.459 [2024-12-14 12:46:55.917727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.916 ms 00:20:56.459 [2024-12-14 12:46:55.917734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:55.918036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.459 [2024-12-14 12:46:55.918052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:56.459 [2024-12-14 12:46:55.918081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:20:56.459 [2024-12-14 12:46:55.918087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:55.953382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:55.953408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.459 [2024-12-14 12:46:55.953418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:55.953425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:55.953498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:55.953506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.459 [2024-12-14 12:46:55.953515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:55.953521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:55.953559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:55.953567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.459 [2024-12-14 12:46:55.953575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:55.953581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:55.953596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:55.953619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.459 [2024-12-14 12:46:55.953626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:55.953633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.012871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.012989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.459 [2024-12-14 12:46:56.013006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.013012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 
12:46:56.060562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.459 [2024-12-14 12:46:56.060601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.060664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.459 [2024-12-14 12:46:56.060681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.060710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.459 [2024-12-14 12:46:56.060723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.060798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.459 [2024-12-14 12:46:56.060815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.060846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:56.459 [2024-12-14 12:46:56.060860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.060896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.459 [2024-12-14 12:46:56.060911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.060952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.459 [2024-12-14 12:46:56.060960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.459 [2024-12-14 12:46:56.060967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.459 [2024-12-14 12:46:56.060972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.459 [2024-12-14 12:46:56.061100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.603 ms, result 0 00:20:57.030 12:46:56 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.031 [2024-12-14 12:46:56.640033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:57.031 [2024-12-14 12:46:56.640171] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78768 ] 00:20:57.292 [2024-12-14 12:46:56.795690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.292 [2024-12-14 12:46:56.870848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.554 [2024-12-14 12:46:57.080976] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.554 [2024-12-14 12:46:57.081026] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.554 [2024-12-14 12:46:57.235368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.235510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.554 [2024-12-14 12:46:57.235525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.554 [2024-12-14 12:46:57.235532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.237635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.237664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.554 [2024-12-14 12:46:57.237672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.086 ms 00:20:57.554 [2024-12-14 12:46:57.237677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.237738] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.554 [2024-12-14 12:46:57.238302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.554 [2024-12-14 12:46:57.238321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.238327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.554 [2024-12-14 12:46:57.238334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:20:57.554 [2024-12-14 12:46:57.238340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.239559] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:57.554 [2024-12-14 12:46:57.249350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.249377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:57.554 [2024-12-14 12:46:57.249387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.792 ms 00:20:57.554 [2024-12-14 12:46:57.249394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.249461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.249470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:57.554 [2024-12-14 12:46:57.249477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:57.554 [2024-12-14 
12:46:57.249482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.254178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.254201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.554 [2024-12-14 12:46:57.254208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:20:57.554 [2024-12-14 12:46:57.254214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.254287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.254295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.554 [2024-12-14 12:46:57.254301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:57.554 [2024-12-14 12:46:57.254307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.254326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.254333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.554 [2024-12-14 12:46:57.254339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.554 [2024-12-14 12:46:57.254344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.254363] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:57.554 [2024-12-14 12:46:57.256963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.256985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.554 [2024-12-14 12:46:57.256992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.605 ms 00:20:57.554 [2024-12-14 12:46:57.256999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.257028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.257035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.554 [2024-12-14 12:46:57.257041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.554 [2024-12-14 12:46:57.257047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.257076] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:57.554 [2024-12-14 12:46:57.257093] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:57.554 [2024-12-14 12:46:57.257129] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:57.554 [2024-12-14 12:46:57.257141] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:57.554 [2024-12-14 12:46:57.257220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:57.554 [2024-12-14 12:46:57.257229] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.554 [2024-12-14 12:46:57.257237] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
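Every FTL management step above is traced as an Action / name / duration / status quadruple (mngt/ftl_mngt.c:427-431), so the cost of each startup phase can be recovered from the console log itself. A minimal sketch, assuming this console output has been captured to a file (build.log is a hypothetical name) with one trace entry per line:

  #!/usr/bin/env bash
  # Summarize FTL trace_step durations from a captured console log.
  # Relies only on the NOTICE format visible above:
  #   ... trace_step: *NOTICE*: [FTL][ftl0] name: <step name>
  #   ... trace_step: *NOTICE*: [FTL][ftl0] duration: <ms> ms
  log=${1:-build.log}   # hypothetical capture of this console output

  awk '
    /trace_step:.*name:/     { sub(/.*name: /, ""); name = $0 }
    /trace_step:.*duration:/ { total[name] += $(NF - 1) }
    END {
      for (n in total) printf "%10.3f ms  %s\n", total[n], n | "sort -rn"
    }
  ' "$log"

The script aggregates repeated steps by name across startup and shutdown; on this log it would rank entries such as "Restore P2L checkpoints" (43.615 ms) and "Initialize NV cache" (40.859 ms) near the top of the startup trace that follows.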
00:20:57.554 [2024-12-14 12:46:57.257246] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.554 [2024-12-14 12:46:57.257253] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.554 [2024-12-14 12:46:57.257260] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:57.554 [2024-12-14 12:46:57.257267] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.554 [2024-12-14 12:46:57.257272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:57.554 [2024-12-14 12:46:57.257278] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:57.554 [2024-12-14 12:46:57.257285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.257290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.554 [2024-12-14 12:46:57.257296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:20:57.554 [2024-12-14 12:46:57.257301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.257368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.554 [2024-12-14 12:46:57.257377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.554 [2024-12-14 12:46:57.257384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:57.554 [2024-12-14 12:46:57.257389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.554 [2024-12-14 12:46:57.257461] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.554 [2024-12-14 12:46:57.257470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.554 [2024-12-14 12:46:57.257476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.554 [2024-12-14 12:46:57.257482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.554 [2024-12-14 12:46:57.257488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.554 [2024-12-14 12:46:57.257493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.554 [2024-12-14 12:46:57.257498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:57.554 [2024-12-14 12:46:57.257504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.554 [2024-12-14 12:46:57.257509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:57.554 [2024-12-14 12:46:57.257515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.554 [2024-12-14 12:46:57.257520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.554 [2024-12-14 12:46:57.257538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:57.554 [2024-12-14 12:46:57.257544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.554 [2024-12-14 12:46:57.257550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.554 [2024-12-14 12:46:57.257556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:57.554 [2024-12-14 12:46:57.257563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.554 [2024-12-14 12:46:57.257569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:57.554 [2024-12-14 12:46:57.257574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:57.554 [2024-12-14 12:46:57.257579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.554 [2024-12-14 12:46:57.257584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.554 [2024-12-14 12:46:57.257590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:57.554 [2024-12-14 12:46:57.257596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.554 [2024-12-14 12:46:57.257610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.554 [2024-12-14 12:46:57.257615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.555 [2024-12-14 12:46:57.257625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.555 [2024-12-14 12:46:57.257630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.555 [2024-12-14 12:46:57.257641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.555 [2024-12-14 12:46:57.257646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.555 [2024-12-14 12:46:57.257656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.555 [2024-12-14 12:46:57.257661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.555 [2024-12-14 12:46:57.257672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.555 [2024-12-14 12:46:57.257677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:57.555 [2024-12-14 12:46:57.257682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.555 [2024-12-14 12:46:57.257687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:57.555 [2024-12-14 12:46:57.257692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:57.555 [2024-12-14 12:46:57.257698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:57.555 [2024-12-14 12:46:57.257708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:57.555 [2024-12-14 12:46:57.257713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257717] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.555 [2024-12-14 12:46:57.257724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.555 [2024-12-14 12:46:57.257731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.555 [2024-12-14 12:46:57.257737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.555 [2024-12-14 12:46:57.257742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:57.555 [2024-12-14 12:46:57.257748] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:57.555 [2024-12-14 12:46:57.257753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:57.555 [2024-12-14 12:46:57.257758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:57.555 [2024-12-14 12:46:57.257763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:57.555 [2024-12-14 12:46:57.257769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:57.555 [2024-12-14 12:46:57.257774] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:57.555 [2024-12-14 12:46:57.257781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:57.555 [2024-12-14 12:46:57.257793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:57.555 [2024-12-14 12:46:57.257799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:57.555 [2024-12-14 12:46:57.257804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:57.555 [2024-12-14 12:46:57.257809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:57.555 [2024-12-14 12:46:57.257814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:57.555 [2024-12-14 12:46:57.257820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:57.555 [2024-12-14 12:46:57.257825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:57.555 [2024-12-14 12:46:57.257830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:57.555 [2024-12-14 12:46:57.257835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:57.555 [2024-12-14 12:46:57.257863] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:57.555 [2024-12-14 12:46:57.257869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257877] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:57.555 [2024-12-14 12:46:57.257882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:57.555 [2024-12-14 12:46:57.257888] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:57.555 [2024-12-14 12:46:57.257894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:57.555 [2024-12-14 12:46:57.257899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.555 [2024-12-14 12:46:57.257907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:57.555 [2024-12-14 12:46:57.257913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:20:57.555 [2024-12-14 12:46:57.257918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.555 [2024-12-14 12:46:57.278858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.555 [2024-12-14 12:46:57.278884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.555 [2024-12-14 12:46:57.278892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.898 ms 00:20:57.555 [2024-12-14 12:46:57.278898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.555 [2024-12-14 12:46:57.278992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.555 [2024-12-14 12:46:57.278999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:57.555 [2024-12-14 12:46:57.279006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:57.555 [2024-12-14 12:46:57.279011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.319886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.319998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.816 [2024-12-14 12:46:57.320015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.859 ms 00:20:57.816 [2024-12-14 12:46:57.320022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.320097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.320106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.816 [2024-12-14 12:46:57.320113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.816 [2024-12-14 12:46:57.320119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.320403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.320416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.816 [2024-12-14 12:46:57.320423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:20:57.816 [2024-12-14 12:46:57.320434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.320535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.320543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.816 [2024-12-14 12:46:57.320550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:57.816 [2024-12-14 12:46:57.320556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.331344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.331368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.816 [2024-12-14 12:46:57.331376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.771 ms 00:20:57.816 [2024-12-14 12:46:57.331381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.341423] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:57.816 [2024-12-14 12:46:57.341449] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.816 [2024-12-14 12:46:57.341458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.341465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.816 [2024-12-14 12:46:57.341471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.989 ms 00:20:57.816 [2024-12-14 12:46:57.341477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.360051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.360084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:57.816 [2024-12-14 12:46:57.360093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.518 ms 00:20:57.816 [2024-12-14 12:46:57.360099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.369311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.369424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:57.816 [2024-12-14 12:46:57.369437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.156 ms 00:20:57.816 [2024-12-14 12:46:57.369443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.378388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.378413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:57.816 [2024-12-14 12:46:57.378421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.906 ms 00:20:57.816 [2024-12-14 12:46:57.378427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.378888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.378905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.816 [2024-12-14 12:46:57.378912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:20:57.816 [2024-12-14 12:46:57.378918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.422550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.422587] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:57.816 [2024-12-14 12:46:57.422597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.615 ms 00:20:57.816 [2024-12-14 12:46:57.422603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.430342] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:57.816 [2024-12-14 12:46:57.441654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.441683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.816 [2024-12-14 12:46:57.441692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.987 ms 00:20:57.816 [2024-12-14 12:46:57.441703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.441776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.441785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:57.816 [2024-12-14 12:46:57.441792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.816 [2024-12-14 12:46:57.441797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.441835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.441842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.816 [2024-12-14 12:46:57.441848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:57.816 [2024-12-14 12:46:57.441856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.441880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.441888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.816 [2024-12-14 12:46:57.441894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.816 [2024-12-14 12:46:57.441900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.441924] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:57.816 [2024-12-14 12:46:57.441931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.441936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:57.816 [2024-12-14 12:46:57.441942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:57.816 [2024-12-14 12:46:57.441948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.460631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.460771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.816 [2024-12-14 12:46:57.460785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.668 ms 00:20:57.816 [2024-12-14 12:46:57.460792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.460860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.816 [2024-12-14 12:46:57.460869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.816 [2024-12-14 12:46:57.460876] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:57.816 [2024-12-14 12:46:57.460882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.816 [2024-12-14 12:46:57.461531] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:57.816 [2024-12-14 12:46:57.463952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.949 ms, result 0 00:20:57.816 [2024-12-14 12:46:57.465210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.816 [2024-12-14 12:46:57.476008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.203  [2024-12-14T12:46:59.513Z] Copying: 19/256 [MB] (19 MBps) [2024-12-14T12:47:00.899Z] Copying: 40/256 [MB] (20 MBps) [2024-12-14T12:47:01.842Z] Copying: 60/256 [MB] (20 MBps) [2024-12-14T12:47:02.787Z] Copying: 79/256 [MB] (18 MBps) [2024-12-14T12:47:03.731Z] Copying: 96/256 [MB] (17 MBps) [2024-12-14T12:47:04.676Z] Copying: 109/256 [MB] (13 MBps) [2024-12-14T12:47:05.619Z] Copying: 122/256 [MB] (12 MBps) [2024-12-14T12:47:06.562Z] Copying: 132/256 [MB] (10 MBps) [2024-12-14T12:47:07.949Z] Copying: 147/256 [MB] (14 MBps) [2024-12-14T12:47:08.522Z] Copying: 163/256 [MB] (15 MBps) [2024-12-14T12:47:09.941Z] Copying: 175/256 [MB] (11 MBps) [2024-12-14T12:47:10.514Z] Copying: 200/256 [MB] (25 MBps) [2024-12-14T12:47:11.901Z] Copying: 216/256 [MB] (16 MBps) [2024-12-14T12:47:12.863Z] Copying: 228/256 [MB] (11 MBps) [2024-12-14T12:47:13.805Z] Copying: 238/256 [MB] (10 MBps) [2024-12-14T12:47:14.066Z] Copying: 249/256 [MB] (11 MBps) [2024-12-14T12:47:14.329Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-14 12:47:14.154609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.592 [2024-12-14 12:47:14.166788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.166841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.592 [2024-12-14 12:47:14.166863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:14.592 [2024-12-14 12:47:14.166873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.166902] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:14.592 [2024-12-14 12:47:14.170166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.170209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.592 [2024-12-14 12:47:14.170223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:21:14.592 [2024-12-14 12:47:14.170234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.170532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.170552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.592 [2024-12-14 12:47:14.170563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:21:14.592 [2024-12-14 12:47:14.170573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.174289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.174311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.592 [2024-12-14 12:47:14.174321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.693 ms 00:21:14.592 [2024-12-14 12:47:14.174330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.181219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.181252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.592 [2024-12-14 12:47:14.181264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:21:14.592 [2024-12-14 12:47:14.181272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.207335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.207377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.592 [2024-12-14 12:47:14.207389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.994 ms 00:21:14.592 [2024-12-14 12:47:14.207398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.223874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.223914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.592 [2024-12-14 12:47:14.223934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.421 ms 00:21:14.592 [2024-12-14 12:47:14.223943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.224131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.224147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.592 [2024-12-14 12:47:14.224167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:21:14.592 [2024-12-14 12:47:14.224176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.249996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.250036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:14.592 [2024-12-14 12:47:14.250048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.802 ms 00:21:14.592 [2024-12-14 12:47:14.250065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.275660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.275699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:14.592 [2024-12-14 12:47:14.275710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.545 ms 00:21:14.592 [2024-12-14 12:47:14.275719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.300884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.300924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:14.592 [2024-12-14 12:47:14.300937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.102 ms 00:21:14.592 [2024-12-14 12:47:14.300945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 
[2024-12-14 12:47:14.325928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.592 [2024-12-14 12:47:14.325968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.592 [2024-12-14 12:47:14.325980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.901 ms 00:21:14.592 [2024-12-14 12:47:14.325988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.592 [2024-12-14 12:47:14.326036] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.592 [2024-12-14 12:47:14.326052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:21:14.592 [2024-12-14 12:47:14.326250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:14.592 [2024-12-14 12:47:14.326272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:14.593 [2024-12-14 12:47:14.326760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326862] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:14.855 [2024-12-14 12:47:14.326915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.855 [2024-12-14 12:47:14.326924] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 000400a1-d0eb-4378-a146-f83adc96f65b 00:21:14.855 [2024-12-14 12:47:14.326934] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:14.855 [2024-12-14 12:47:14.326942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:14.855 [2024-12-14 12:47:14.326951] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:14.855 [2024-12-14 12:47:14.326960] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:14.855 [2024-12-14 12:47:14.326968] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.855 [2024-12-14 12:47:14.326977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.855 [2024-12-14 12:47:14.326988] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.855 [2024-12-14 12:47:14.326995] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.855 [2024-12-14 12:47:14.327002] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.855 [2024-12-14 12:47:14.327009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.855 [2024-12-14 12:47:14.327017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.855 [2024-12-14 12:47:14.327026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:21:14.855 [2024-12-14 12:47:14.327033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.340733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.855 [2024-12-14 12:47:14.340769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.855 [2024-12-14 12:47:14.340780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.654 ms 00:21:14.855 [2024-12-14 12:47:14.340789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.341214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.855 [2024-12-14 12:47:14.341236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.855 [2024-12-14 12:47:14.341248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:21:14.855 [2024-12-14 12:47:14.341257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.380219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.855 [2024-12-14 12:47:14.380261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.855 [2024-12-14 12:47:14.380273] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.855 [2024-12-14 12:47:14.380289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.380394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.855 [2024-12-14 12:47:14.380407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.855 [2024-12-14 12:47:14.380417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.855 [2024-12-14 12:47:14.380428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.380480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.855 [2024-12-14 12:47:14.380491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.855 [2024-12-14 12:47:14.380501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.855 [2024-12-14 12:47:14.380511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.380535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.855 [2024-12-14 12:47:14.380544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.855 [2024-12-14 12:47:14.380553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.855 [2024-12-14 12:47:14.380562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.855 [2024-12-14 12:47:14.465668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.855 [2024-12-14 12:47:14.465715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.856 [2024-12-14 12:47:14.465728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.465737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.534866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.534912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.856 [2024-12-14 12:47:14.534923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.534932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.535016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.856 [2024-12-14 12:47:14.535026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.535035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.535104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.856 [2024-12-14 12:47:14.535114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.535123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.535240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:21:14.856 [2024-12-14 12:47:14.535249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.535258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.535301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.856 [2024-12-14 12:47:14.535315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.535324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.535378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.856 [2024-12-14 12:47:14.535386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.535396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.856 [2024-12-14 12:47:14.535457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.856 [2024-12-14 12:47:14.535467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.856 [2024-12-14 12:47:14.535477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.856 [2024-12-14 12:47:14.535631] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.855 ms, result 0 00:21:15.799 00:21:15.799 00:21:15.799 12:47:15 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:16.387 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:21:16.387 Process with pid 78714 is not found 00:21:16.387 12:47:15 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78714 00:21:16.387 12:47:15 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78714 ']' 00:21:16.387 12:47:15 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78714 00:21:16.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78714) - No such process 00:21:16.387 12:47:15 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78714 is not found' 00:21:16.387 00:21:16.387 real 1m20.559s 00:21:16.387 user 1m36.757s 00:21:16.387 sys 0m15.514s 00:21:16.387 12:47:15 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.387 12:47:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:16.387 ************************************ 00:21:16.387 END TEST ftl_trim 00:21:16.387 ************************************ 00:21:16.387 12:47:16 ftl -- 
ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:16.387 12:47:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:16.387 12:47:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.387 12:47:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:16.387 ************************************ 00:21:16.387 START TEST ftl_restore 00:21:16.387 ************************************ 00:21:16.387 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:21:16.387 * Looking for test storage... 00:21:16.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:16.387 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:16.387 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:16.387 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:21:16.649 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.649 12:47:16 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:21:16.649 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.649 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.649 --rc genhtml_branch_coverage=1 00:21:16.649 --rc genhtml_function_coverage=1 00:21:16.649 --rc genhtml_legend=1 00:21:16.649 --rc geninfo_all_blocks=1 00:21:16.649 --rc geninfo_unexecuted_blocks=1 00:21:16.649 00:21:16.649 ' 00:21:16.649 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.649 --rc genhtml_branch_coverage=1 00:21:16.649 --rc genhtml_function_coverage=1 00:21:16.649 --rc genhtml_legend=1 00:21:16.649 --rc geninfo_all_blocks=1 00:21:16.649 --rc geninfo_unexecuted_blocks=1 00:21:16.649 00:21:16.649 ' 00:21:16.649 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.649 --rc genhtml_branch_coverage=1 00:21:16.649 --rc genhtml_function_coverage=1 00:21:16.649 --rc genhtml_legend=1 00:21:16.649 --rc geninfo_all_blocks=1 00:21:16.649 --rc geninfo_unexecuted_blocks=1 00:21:16.649 00:21:16.649 ' 00:21:16.649 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:16.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.649 --rc genhtml_branch_coverage=1 00:21:16.649 --rc genhtml_function_coverage=1 00:21:16.649 --rc genhtml_legend=1 00:21:16.649 --rc geninfo_all_blocks=1 00:21:16.649 --rc geninfo_unexecuted_blocks=1 00:21:16.649 00:21:16.649 ' 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
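The xtrace above is scripts/common.sh comparing the installed lcov version against 2 ("lt 1.15 2" resolves to "cmp_versions 1.15 '<' 2") so the test can pick the matching LCOV_OPTS set. A minimal sketch of that comparison, with variable names taken from the trace and the function body paraphrased rather than copied:

    cmp_versions() {                     # sketch, paraphrasing scripts/common.sh
        local IFS=.-: op=$2 v            # IFS splits dotted/dashed version strings
        read -ra ver1 <<< "$1"; local ver1_l=${#ver1[@]}
        read -ra ver2 <<< "$3"; local ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
            ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # versions compare equal
    }
    cmp_versions 1.15 '<' 2 && echo "pre-2.x lcov: use the v1 option set"
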
00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:16.649 12:47:16 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hYsdWjNIpc 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:16.650 
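Pulled apart, the restore.sh prologue just traced is a plain getopts pattern; a sketch using the option string from the trace (the -u and -f branches exist in the real script, but their behavior is assumed here):

    while getopts ':u:c:f' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;       # -c 0000:00:10.0, the NV-cache BDF
            u|f) ;;                      # accepted by restore.sh; not used in this run
        esac
    done
    shift $((OPTIND - 1))                # appears in the trace as "shift 2"
    device=$1                            # remaining positional arg: 0000:00:11.0
    timeout=240                          # reappears below as "rpc.py -t 240"
    trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
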
12:47:16 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79035 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79035 00:21:16.650 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79035 ']' 00:21:16.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.650 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.650 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.650 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.650 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.650 12:47:16 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:16.650 12:47:16 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.650 [2024-12-14 12:47:16.319578] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:16.650 [2024-12-14 12:47:16.319719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79035 ] 00:21:16.928 [2024-12-14 12:47:16.483172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.928 [2024-12-14 12:47:16.603840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:17.872 12:47:17 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:17.872 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:18.133 { 00:21:18.133 "name": "nvme0n1", 00:21:18.133 "aliases": [ 00:21:18.133 "504cf751-4698-4c4f-b700-bf31570289df" 00:21:18.133 ], 00:21:18.133 "product_name": "NVMe disk", 00:21:18.133 "block_size": 4096, 00:21:18.133 "num_blocks": 1310720, 00:21:18.133 "uuid": 
"504cf751-4698-4c4f-b700-bf31570289df", 00:21:18.133 "numa_id": -1, 00:21:18.133 "assigned_rate_limits": { 00:21:18.133 "rw_ios_per_sec": 0, 00:21:18.133 "rw_mbytes_per_sec": 0, 00:21:18.133 "r_mbytes_per_sec": 0, 00:21:18.133 "w_mbytes_per_sec": 0 00:21:18.133 }, 00:21:18.133 "claimed": true, 00:21:18.133 "claim_type": "read_many_write_one", 00:21:18.133 "zoned": false, 00:21:18.133 "supported_io_types": { 00:21:18.133 "read": true, 00:21:18.133 "write": true, 00:21:18.133 "unmap": true, 00:21:18.133 "flush": true, 00:21:18.133 "reset": true, 00:21:18.133 "nvme_admin": true, 00:21:18.133 "nvme_io": true, 00:21:18.133 "nvme_io_md": false, 00:21:18.133 "write_zeroes": true, 00:21:18.133 "zcopy": false, 00:21:18.133 "get_zone_info": false, 00:21:18.133 "zone_management": false, 00:21:18.133 "zone_append": false, 00:21:18.133 "compare": true, 00:21:18.133 "compare_and_write": false, 00:21:18.133 "abort": true, 00:21:18.133 "seek_hole": false, 00:21:18.133 "seek_data": false, 00:21:18.133 "copy": true, 00:21:18.133 "nvme_iov_md": false 00:21:18.133 }, 00:21:18.133 "driver_specific": { 00:21:18.133 "nvme": [ 00:21:18.133 { 00:21:18.133 "pci_address": "0000:00:11.0", 00:21:18.133 "trid": { 00:21:18.133 "trtype": "PCIe", 00:21:18.133 "traddr": "0000:00:11.0" 00:21:18.133 }, 00:21:18.133 "ctrlr_data": { 00:21:18.133 "cntlid": 0, 00:21:18.133 "vendor_id": "0x1b36", 00:21:18.133 "model_number": "QEMU NVMe Ctrl", 00:21:18.133 "serial_number": "12341", 00:21:18.133 "firmware_revision": "8.0.0", 00:21:18.133 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:18.133 "oacs": { 00:21:18.133 "security": 0, 00:21:18.133 "format": 1, 00:21:18.133 "firmware": 0, 00:21:18.133 "ns_manage": 1 00:21:18.133 }, 00:21:18.133 "multi_ctrlr": false, 00:21:18.133 "ana_reporting": false 00:21:18.133 }, 00:21:18.133 "vs": { 00:21:18.133 "nvme_version": "1.4" 00:21:18.133 }, 00:21:18.133 "ns_data": { 00:21:18.133 "id": 1, 00:21:18.133 "can_share": false 00:21:18.133 } 00:21:18.133 } 00:21:18.133 ], 00:21:18.133 "mp_policy": "active_passive" 00:21:18.133 } 00:21:18.133 } 00:21:18.133 ]' 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:18.133 12:47:17 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:18.133 12:47:17 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:18.133 12:47:17 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:18.133 12:47:17 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:18.133 12:47:17 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:18.133 12:47:17 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:18.394 12:47:18 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=bc57d621-984f-450b-b9e7-31f8e6c41ba0 00:21:18.394 12:47:18 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:18.394 12:47:18 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bc57d621-984f-450b-b9e7-31f8e6c41ba0 00:21:18.655 12:47:18 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=433d7624-babd-4c15-9a6d-1db882fc8857 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 433d7624-babd-4c15-9a6d-1db882fc8857 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:18.916 12:47:18 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:18.916 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:18.916 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:18.916 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:18.916 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:18.916 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:19.177 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:19.177 { 00:21:19.177 "name": "7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9", 00:21:19.177 "aliases": [ 00:21:19.177 "lvs/nvme0n1p0" 00:21:19.177 ], 00:21:19.177 "product_name": "Logical Volume", 00:21:19.177 "block_size": 4096, 00:21:19.177 "num_blocks": 26476544, 00:21:19.177 "uuid": "7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9", 00:21:19.177 "assigned_rate_limits": { 00:21:19.177 "rw_ios_per_sec": 0, 00:21:19.177 "rw_mbytes_per_sec": 0, 00:21:19.177 "r_mbytes_per_sec": 0, 00:21:19.177 "w_mbytes_per_sec": 0 00:21:19.177 }, 00:21:19.177 "claimed": false, 00:21:19.177 "zoned": false, 00:21:19.177 "supported_io_types": { 00:21:19.177 "read": true, 00:21:19.177 "write": true, 00:21:19.177 "unmap": true, 00:21:19.177 "flush": false, 00:21:19.177 "reset": true, 00:21:19.177 "nvme_admin": false, 00:21:19.177 "nvme_io": false, 00:21:19.177 "nvme_io_md": false, 00:21:19.177 "write_zeroes": true, 00:21:19.177 "zcopy": false, 00:21:19.177 "get_zone_info": false, 00:21:19.177 "zone_management": false, 00:21:19.177 "zone_append": false, 00:21:19.177 "compare": false, 00:21:19.177 "compare_and_write": false, 00:21:19.177 "abort": false, 00:21:19.177 "seek_hole": true, 00:21:19.177 "seek_data": true, 00:21:19.177 "copy": false, 00:21:19.177 "nvme_iov_md": false 00:21:19.177 }, 00:21:19.177 "driver_specific": { 00:21:19.177 "lvol": { 00:21:19.177 "lvol_store_uuid": "433d7624-babd-4c15-9a6d-1db882fc8857", 00:21:19.177 "base_bdev": "nvme0n1", 00:21:19.177 "thin_provision": true, 00:21:19.177 "num_allocated_clusters": 0, 00:21:19.177 "snapshot": false, 00:21:19.177 "clone": false, 00:21:19.177 "esnap_clone": false 00:21:19.177 } 00:21:19.177 } 00:21:19.177 } 00:21:19.177 ]' 00:21:19.177 12:47:18 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:19.177 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:19.177 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:19.177 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:19.177 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:19.177 12:47:18 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:19.177 12:47:18 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:19.177 12:47:18 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:19.177 12:47:18 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:19.438 12:47:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:19.438 12:47:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:19.438 12:47:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:19.438 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:19.438 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:19.438 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:19.438 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:19.438 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:19.700 { 00:21:19.700 "name": "7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9", 00:21:19.700 "aliases": [ 00:21:19.700 "lvs/nvme0n1p0" 00:21:19.700 ], 00:21:19.700 "product_name": "Logical Volume", 00:21:19.700 "block_size": 4096, 00:21:19.700 "num_blocks": 26476544, 00:21:19.700 "uuid": "7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9", 00:21:19.700 "assigned_rate_limits": { 00:21:19.700 "rw_ios_per_sec": 0, 00:21:19.700 "rw_mbytes_per_sec": 0, 00:21:19.700 "r_mbytes_per_sec": 0, 00:21:19.700 "w_mbytes_per_sec": 0 00:21:19.700 }, 00:21:19.700 "claimed": false, 00:21:19.700 "zoned": false, 00:21:19.700 "supported_io_types": { 00:21:19.700 "read": true, 00:21:19.700 "write": true, 00:21:19.700 "unmap": true, 00:21:19.700 "flush": false, 00:21:19.700 "reset": true, 00:21:19.700 "nvme_admin": false, 00:21:19.700 "nvme_io": false, 00:21:19.700 "nvme_io_md": false, 00:21:19.700 "write_zeroes": true, 00:21:19.700 "zcopy": false, 00:21:19.700 "get_zone_info": false, 00:21:19.700 "zone_management": false, 00:21:19.700 "zone_append": false, 00:21:19.700 "compare": false, 00:21:19.700 "compare_and_write": false, 00:21:19.700 "abort": false, 00:21:19.700 "seek_hole": true, 00:21:19.700 "seek_data": true, 00:21:19.700 "copy": false, 00:21:19.700 "nvme_iov_md": false 00:21:19.700 }, 00:21:19.700 "driver_specific": { 00:21:19.700 "lvol": { 00:21:19.700 "lvol_store_uuid": "433d7624-babd-4c15-9a6d-1db882fc8857", 00:21:19.700 "base_bdev": "nvme0n1", 00:21:19.700 "thin_provision": true, 00:21:19.700 "num_allocated_clusters": 0, 00:21:19.700 "snapshot": false, 00:21:19.700 "clone": false, 00:21:19.700 "esnap_clone": false 00:21:19.700 } 00:21:19.700 } 00:21:19.700 } 00:21:19.700 ]' 00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
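The get_bdev_size helper traced several times in this stretch boils down to one RPC call plus integer arithmetic; a sketch (rpc.py here stands in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    get_bdev_size() {                    # prints the bdev size in MiB (sketch)
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $((bs * nb / 1024 / 1024))
    }

which matches the numbers above: 4096 * 1310720 / 1024 / 1024 = 5120 MiB for nvme0n1, and 4096 * 26476544 / 1024 / 1024 = 103424 MiB for the lvol.
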
00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:19.700 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:19.700 12:47:19 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:19.700 12:47:19 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:19.961 12:47:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:19.961 12:47:19 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:19.961 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:19.961 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:19.961 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:19.961 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:19.961 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 00:21:20.222 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:20.222 { 00:21:20.222 "name": "7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9", 00:21:20.222 "aliases": [ 00:21:20.222 "lvs/nvme0n1p0" 00:21:20.222 ], 00:21:20.222 "product_name": "Logical Volume", 00:21:20.222 "block_size": 4096, 00:21:20.222 "num_blocks": 26476544, 00:21:20.222 "uuid": "7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9", 00:21:20.222 "assigned_rate_limits": { 00:21:20.222 "rw_ios_per_sec": 0, 00:21:20.222 "rw_mbytes_per_sec": 0, 00:21:20.222 "r_mbytes_per_sec": 0, 00:21:20.222 "w_mbytes_per_sec": 0 00:21:20.222 }, 00:21:20.222 "claimed": false, 00:21:20.222 "zoned": false, 00:21:20.222 "supported_io_types": { 00:21:20.222 "read": true, 00:21:20.222 "write": true, 00:21:20.222 "unmap": true, 00:21:20.223 "flush": false, 00:21:20.223 "reset": true, 00:21:20.223 "nvme_admin": false, 00:21:20.223 "nvme_io": false, 00:21:20.223 "nvme_io_md": false, 00:21:20.223 "write_zeroes": true, 00:21:20.223 "zcopy": false, 00:21:20.223 "get_zone_info": false, 00:21:20.223 "zone_management": false, 00:21:20.223 "zone_append": false, 00:21:20.223 "compare": false, 00:21:20.223 "compare_and_write": false, 00:21:20.223 "abort": false, 00:21:20.223 "seek_hole": true, 00:21:20.223 "seek_data": true, 00:21:20.223 "copy": false, 00:21:20.223 "nvme_iov_md": false 00:21:20.223 }, 00:21:20.223 "driver_specific": { 00:21:20.223 "lvol": { 00:21:20.223 "lvol_store_uuid": "433d7624-babd-4c15-9a6d-1db882fc8857", 00:21:20.223 "base_bdev": "nvme0n1", 00:21:20.223 "thin_provision": true, 00:21:20.223 "num_allocated_clusters": 0, 00:21:20.223 "snapshot": false, 00:21:20.223 "clone": false, 00:21:20.223 "esnap_clone": false 00:21:20.223 } 00:21:20.223 } 00:21:20.223 } 00:21:20.223 ]' 00:21:20.223 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:20.223 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:20.223 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:20.223 12:47:19 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:20.223 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:20.223 12:47:19 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 --l2p_dram_limit 10' 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:20.223 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:20.223 12:47:19 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7ca27fc5-55cf-46fc-9cd2-2520df0b1bd9 --l2p_dram_limit 10 -c nvc0n1p0 00:21:20.484 [2024-12-14 12:47:20.018880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.484 [2024-12-14 12:47:20.018912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:20.484 [2024-12-14 12:47:20.018925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:20.484 [2024-12-14 12:47:20.018931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.484 [2024-12-14 12:47:20.018969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.484 [2024-12-14 12:47:20.018977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:20.485 [2024-12-14 12:47:20.018985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:20.485 [2024-12-14 12:47:20.018991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.019010] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:20.485 [2024-12-14 12:47:20.019570] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:20.485 [2024-12-14 12:47:20.019587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.019594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:20.485 [2024-12-14 12:47:20.019602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:21:20.485 [2024-12-14 12:47:20.019608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.019656] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2 00:21:20.485 [2024-12-14 12:47:20.020578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.020602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:20.485 [2024-12-14 12:47:20.020611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:20.485 [2024-12-14 12:47:20.020619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.025360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 
12:47:20.025385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:20.485 [2024-12-14 12:47:20.025392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.705 ms 00:21:20.485 [2024-12-14 12:47:20.025400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.025462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.025471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:20.485 [2024-12-14 12:47:20.025478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:20.485 [2024-12-14 12:47:20.025493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.025532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.025541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:20.485 [2024-12-14 12:47:20.025547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:20.485 [2024-12-14 12:47:20.025556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.025571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:20.485 [2024-12-14 12:47:20.028500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.028523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:20.485 [2024-12-14 12:47:20.028532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.929 ms 00:21:20.485 [2024-12-14 12:47:20.028538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.028565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.028573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:20.485 [2024-12-14 12:47:20.028580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:20.485 [2024-12-14 12:47:20.028586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.028600] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:20.485 [2024-12-14 12:47:20.028707] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:20.485 [2024-12-14 12:47:20.028719] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:20.485 [2024-12-14 12:47:20.028727] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:20.485 [2024-12-14 12:47:20.028736] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:20.485 [2024-12-14 12:47:20.028743] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:20.485 [2024-12-14 12:47:20.028751] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:20.485 [2024-12-14 12:47:20.028757] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:20.485 [2024-12-14 12:47:20.028767] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:20.485 [2024-12-14 12:47:20.028772] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:20.485 [2024-12-14 12:47:20.028779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.028789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:20.485 [2024-12-14 12:47:20.028796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:21:20.485 [2024-12-14 12:47:20.028802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.028870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.485 [2024-12-14 12:47:20.028882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:20.485 [2024-12-14 12:47:20.028890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:20.485 [2024-12-14 12:47:20.028896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.485 [2024-12-14 12:47:20.028971] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:20.485 [2024-12-14 12:47:20.028979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:20.485 [2024-12-14 12:47:20.028986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.485 [2024-12-14 12:47:20.028992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.028999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:20.485 [2024-12-14 12:47:20.029004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:20.485 [2024-12-14 12:47:20.029026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.485 [2024-12-14 12:47:20.029037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:20.485 [2024-12-14 12:47:20.029044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:20.485 [2024-12-14 12:47:20.029051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:20.485 [2024-12-14 12:47:20.029069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:20.485 [2024-12-14 12:47:20.029077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:20.485 [2024-12-14 12:47:20.029082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:20.485 [2024-12-14 12:47:20.029095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:20.485 [2024-12-14 12:47:20.029113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:20.485 
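A quick arithmetic check on the layout being dumped: the 80.00 MiB l2p region is exactly the product of the two values reported a few lines earlier, 20971520 L2P entries at an address size of 4 bytes:

    echo $((20971520 * 4 / 1024 / 1024))   # -> 80 (MiB, the l2p region size)
    echo $((20971520 * 4096 / 1024**3))    # -> 80 (GiB of addressable user data at 4 KiB blocks)

So the full map would occupy 80 MiB, but bdev_ftl_create was invoked with --l2p_dram_limit 10, which is why the L2P cache later reports a maximum resident size of 9 (of 10) MiB.
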
[2024-12-14 12:47:20.029130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:20.485 [2024-12-14 12:47:20.029147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:20.485 [2024-12-14 12:47:20.029164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:20.485 [2024-12-14 12:47:20.029183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.485 [2024-12-14 12:47:20.029196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:20.485 [2024-12-14 12:47:20.029202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:20.485 [2024-12-14 12:47:20.029209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:20.485 [2024-12-14 12:47:20.029214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:20.485 [2024-12-14 12:47:20.029221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:20.485 [2024-12-14 12:47:20.029226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:20.485 [2024-12-14 12:47:20.029237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:20.485 [2024-12-14 12:47:20.029243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029249] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:20.485 [2024-12-14 12:47:20.029256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:20.485 [2024-12-14 12:47:20.029262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:20.485 [2024-12-14 12:47:20.029274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:20.485 [2024-12-14 12:47:20.029282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:20.485 [2024-12-14 12:47:20.029287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:20.485 [2024-12-14 12:47:20.029294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:20.485 [2024-12-14 12:47:20.029299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:20.485 [2024-12-14 12:47:20.029305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:20.485 [2024-12-14 12:47:20.029311] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:20.485 [2024-12-14 
12:47:20.029320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:20.486 [2024-12-14 12:47:20.029334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:20.486 [2024-12-14 12:47:20.029340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:20.486 [2024-12-14 12:47:20.029347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:20.486 [2024-12-14 12:47:20.029353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:20.486 [2024-12-14 12:47:20.029362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:20.486 [2024-12-14 12:47:20.029368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:20.486 [2024-12-14 12:47:20.029374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:20.486 [2024-12-14 12:47:20.029379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:20.486 [2024-12-14 12:47:20.029388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:20.486 [2024-12-14 12:47:20.029418] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:20.486 [2024-12-14 12:47:20.029426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:20.486 [2024-12-14 12:47:20.029438] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:20.486 [2024-12-14 12:47:20.029444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:20.486 [2024-12-14 12:47:20.029451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:20.486 [2024-12-14 12:47:20.029457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.486 [2024-12-14 12:47:20.029464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:20.486 [2024-12-14 12:47:20.029469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:20.486 [2024-12-14 12:47:20.029476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.486 [2024-12-14 12:47:20.029517] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:20.486 [2024-12-14 12:47:20.029529] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:23.789 [2024-12-14 12:47:23.094876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.094929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:23.790 [2024-12-14 12:47:23.094944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3065.347 ms 00:21:23.790 [2024-12-14 12:47:23.094954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.120908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.120952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.790 [2024-12-14 12:47:23.120965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.736 ms 00:21:23.790 [2024-12-14 12:47:23.120974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.121102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.121115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:23.790 [2024-12-14 12:47:23.121125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:23.790 [2024-12-14 12:47:23.121139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.152031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.152074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.790 [2024-12-14 12:47:23.152084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.846 ms 00:21:23.790 [2024-12-14 12:47:23.152094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.152122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.152137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.790 [2024-12-14 12:47:23.152145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:23.790 [2024-12-14 12:47:23.152160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.152524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.152545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.790 [2024-12-14 12:47:23.152555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:21:23.790 [2024-12-14 12:47:23.152565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 
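Each management step in this log is the same four-line pattern out of mngt/ftl_mngt.c (Action or Rollback, name, duration, status). When profiling startup it is handy to pair names with durations; a throwaway one-liner against a saved copy of this console output (ftl.log is a stand-in name, and the name-then-duration ordering shown above is assumed):

    sed -n 's/.*\[FTL\]\[ftl0\] name: //p; s/.*\[FTL\]\[ftl0\] duration: //p' ftl.log | paste - -

which yields lines such as "Scrub NV cache" paired with "3065.347 ms", immediately showing that the NV-cache scrub dominates the 3465.508 ms 'FTL startup' total reported below.
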
[2024-12-14 12:47:23.152667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.152678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.790 [2024-12-14 12:47:23.152690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:23.790 [2024-12-14 12:47:23.152701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.167422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.167456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.790 [2024-12-14 12:47:23.167466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.704 ms 00:21:23.790 [2024-12-14 12:47:23.167475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.189962] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:23.790 [2024-12-14 12:47:23.193420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.193455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:23.790 [2024-12-14 12:47:23.193473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.871 ms 00:21:23.790 [2024-12-14 12:47:23.193485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.271879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.271921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:23.790 [2024-12-14 12:47:23.271937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.344 ms 00:21:23.790 [2024-12-14 12:47:23.271946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.272147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.272162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:23.790 [2024-12-14 12:47:23.272176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:21:23.790 [2024-12-14 12:47:23.272184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.296753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.296791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:23.790 [2024-12-14 12:47:23.296805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.517 ms 00:21:23.790 [2024-12-14 12:47:23.296813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.321095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.321133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:23.790 [2024-12-14 12:47:23.321147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.230 ms 00:21:23.790 [2024-12-14 12:47:23.321155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.321787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.321810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:23.790 
[2024-12-14 12:47:23.321823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:21:23.790 [2024-12-14 12:47:23.321833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.403919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.403967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:23.790 [2024-12-14 12:47:23.403987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.021 ms 00:21:23.790 [2024-12-14 12:47:23.403997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.431355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.431397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:23.790 [2024-12-14 12:47:23.431413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.246 ms 00:21:23.790 [2024-12-14 12:47:23.431422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.457216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.457259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:23.790 [2024-12-14 12:47:23.457273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.738 ms 00:21:23.790 [2024-12-14 12:47:23.457281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.483442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.483487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:23.790 [2024-12-14 12:47:23.483502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.105 ms 00:21:23.790 [2024-12-14 12:47:23.483511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.483567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.483578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:23.790 [2024-12-14 12:47:23.483593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:23.790 [2024-12-14 12:47:23.483602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.483706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.790 [2024-12-14 12:47:23.483720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:23.790 [2024-12-14 12:47:23.483731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:21:23.790 [2024-12-14 12:47:23.483739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.790 [2024-12-14 12:47:23.484920] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3465.508 ms, result 0 00:21:23.790 { 00:21:23.790 "name": "ftl0", 00:21:23.790 "uuid": "c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2" 00:21:23.790 } 00:21:23.790 12:47:23 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:23.790 12:47:23 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:24.052 12:47:23 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:24.052 12:47:23 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:24.313 [2024-12-14 12:47:23.940256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.940310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:24.313 [2024-12-14 12:47:23.940322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:24.313 [2024-12-14 12:47:23.940332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.940357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:24.313 [2024-12-14 12:47:23.943374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.943410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:24.313 [2024-12-14 12:47:23.943424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.995 ms 00:21:24.313 [2024-12-14 12:47:23.943433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.943704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.943718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:24.313 [2024-12-14 12:47:23.943730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:21:24.313 [2024-12-14 12:47:23.943738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.946984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.947008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:24.313 [2024-12-14 12:47:23.947020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:21:24.313 [2024-12-14 12:47:23.947029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.953156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.953192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:24.313 [2024-12-14 12:47:23.953209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.101 ms 00:21:24.313 [2024-12-14 12:47:23.953218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.978338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.978381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:24.313 [2024-12-14 12:47:23.978396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.035 ms 00:21:24.313 [2024-12-14 12:47:23.978404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.996756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.996798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:24.313 [2024-12-14 12:47:23.996813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.201 ms 00:21:24.313 [2024-12-14 12:47:23.996822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:23.996994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:23.997008] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:24.313 [2024-12-14 12:47:23.997020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:24.313 [2024-12-14 12:47:23.997028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:24.022656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:24.022698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:24.313 [2024-12-14 12:47:24.022712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.599 ms 00:21:24.313 [2024-12-14 12:47:24.022720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.313 [2024-12-14 12:47:24.048242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.313 [2024-12-14 12:47:24.048283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:24.313 [2024-12-14 12:47:24.048297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.470 ms 00:21:24.313 [2024-12-14 12:47:24.048304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.575 [2024-12-14 12:47:24.073030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.575 [2024-12-14 12:47:24.073077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:24.575 [2024-12-14 12:47:24.073090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.668 ms 00:21:24.575 [2024-12-14 12:47:24.073098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.575 [2024-12-14 12:47:24.097574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.576 [2024-12-14 12:47:24.097623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:24.576 [2024-12-14 12:47:24.097637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.367 ms 00:21:24.576 [2024-12-14 12:47:24.097645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.576 [2024-12-14 12:47:24.097694] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:24.576 [2024-12-14 12:47:24.097710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097801] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.097997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 
[2024-12-14 12:47:24.098021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:24.576 [2024-12-14 12:47:24.098269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:24.576 [2024-12-14 12:47:24.098508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:24.577 [2024-12-14 12:47:24.098660] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:24.577 [2024-12-14 12:47:24.098670] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2 00:21:24.577 [2024-12-14 12:47:24.098679] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:24.577 [2024-12-14 12:47:24.098694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:24.577 [2024-12-14 12:47:24.098705] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:24.577 [2024-12-14 12:47:24.098716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:24.577 [2024-12-14 12:47:24.098724] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:24.577 [2024-12-14 12:47:24.098734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:24.577 [2024-12-14 12:47:24.098741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:24.577 [2024-12-14 12:47:24.098749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:24.577 [2024-12-14 12:47:24.098756] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:24.577 [2024-12-14 12:47:24.098765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.577 [2024-12-14 12:47:24.098773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:24.577 [2024-12-14 12:47:24.098785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:21:24.577 [2024-12-14 12:47:24.098794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.112282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.577 [2024-12-14 12:47:24.112320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:24.577 [2024-12-14 12:47:24.112333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.443 ms 00:21:24.577 [2024-12-14 12:47:24.112341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.112746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.577 [2024-12-14 12:47:24.112767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:24.577 [2024-12-14 12:47:24.112782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:21:24.577 [2024-12-14 12:47:24.112789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.159223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.577 [2024-12-14 12:47:24.159269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:24.577 [2024-12-14 12:47:24.159284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.577 [2024-12-14 12:47:24.159294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.159364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.577 [2024-12-14 12:47:24.159373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:24.577 [2024-12-14 12:47:24.159387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.577 [2024-12-14 12:47:24.159396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.159493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.577 [2024-12-14 12:47:24.159505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:24.577 [2024-12-14 12:47:24.159516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.577 [2024-12-14 12:47:24.159524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.159547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.577 [2024-12-14 12:47:24.159559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:24.577 [2024-12-14 12:47:24.159571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.577 [2024-12-14 12:47:24.159581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.577 [2024-12-14 12:47:24.244984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.577 [2024-12-14 12:47:24.245035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:24.577 [2024-12-14 12:47:24.245051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:24.577 [2024-12-14 12:47:24.245080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.314556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.314604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:24.839 [2024-12-14 12:47:24.314618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.314630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.314717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.314728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:24.839 [2024-12-14 12:47:24.314739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.314748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.314823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.314836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:24.839 [2024-12-14 12:47:24.314847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.314855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.314961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.314974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:24.839 [2024-12-14 12:47:24.314985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.314995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.315032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.315042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:24.839 [2024-12-14 12:47:24.315053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.315092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.315139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.315150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:24.839 [2024-12-14 12:47:24.315160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.315168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.315220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:24.839 [2024-12-14 12:47:24.315232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:24.839 [2024-12-14 12:47:24.315244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:24.839 [2024-12-14 12:47:24.315253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.839 [2024-12-14 12:47:24.315401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.097 ms, result 0 00:21:24.839 true 00:21:24.839 12:47:24 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79035 
00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79035 ']' 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79035 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79035 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.839 killing process with pid 79035 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79035' 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79035 00:21:24.839 12:47:24 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79035 00:21:31.428 12:47:30 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:34.733 262144+0 records in 00:21:34.733 262144+0 records out 00:21:34.733 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.01512 s, 267 MB/s 00:21:34.733 12:47:34 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:37.282 12:47:36 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:37.282 [2024-12-14 12:47:36.593187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:37.282 [2024-12-14 12:47:36.593272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79266 ] 00:21:37.282 [2024-12-14 12:47:36.744650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.282 [2024-12-14 12:47:36.846278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.542 [2024-12-14 12:47:37.137853] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.542 [2024-12-14 12:47:37.137935] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:37.806 [2024-12-14 12:47:37.298971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.299038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.806 [2024-12-14 12:47:37.299067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:37.806 [2024-12-14 12:47:37.299076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.299130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.299145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.806 [2024-12-14 12:47:37.299154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:37.806 [2024-12-14 12:47:37.299162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.299183] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:37.806 [2024-12-14 12:47:37.299907] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.806 [2024-12-14 12:47:37.299935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.299944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.806 [2024-12-14 12:47:37.299954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:21:37.806 [2024-12-14 12:47:37.299962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.301606] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:37.806 [2024-12-14 12:47:37.315927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.315977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:37.806 [2024-12-14 12:47:37.315992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.323 ms 00:21:37.806 [2024-12-14 12:47:37.316000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.316090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.316101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:37.806 [2024-12-14 12:47:37.316111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:37.806 [2024-12-14 12:47:37.316119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.323948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.323992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.806 [2024-12-14 12:47:37.324003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.752 ms 00:21:37.806 [2024-12-14 12:47:37.324017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.324111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.324120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.806 [2024-12-14 12:47:37.324130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:37.806 [2024-12-14 12:47:37.324153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.324197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.324208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.806 [2024-12-14 12:47:37.324217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:37.806 [2024-12-14 12:47:37.324225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.324252] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:37.806 [2024-12-14 12:47:37.328327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.328370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.806 [2024-12-14 12:47:37.328383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:21:37.806 [2024-12-14 12:47:37.328391] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.328428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.328438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.806 [2024-12-14 12:47:37.328447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:37.806 [2024-12-14 12:47:37.328455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.328505] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:37.806 [2024-12-14 12:47:37.328530] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:37.806 [2024-12-14 12:47:37.328569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:37.806 [2024-12-14 12:47:37.328590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:37.806 [2024-12-14 12:47:37.328697] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.806 [2024-12-14 12:47:37.328707] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.806 [2024-12-14 12:47:37.328719] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.806 [2024-12-14 12:47:37.328730] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.806 [2024-12-14 12:47:37.328740] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.806 [2024-12-14 12:47:37.328748] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:37.806 [2024-12-14 12:47:37.328756] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.806 [2024-12-14 12:47:37.328764] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.806 [2024-12-14 12:47:37.328775] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.806 [2024-12-14 12:47:37.328783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.328792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.806 [2024-12-14 12:47:37.328801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:21:37.806 [2024-12-14 12:47:37.328809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.328893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.806 [2024-12-14 12:47:37.328904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.806 [2024-12-14 12:47:37.328912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:37.806 [2024-12-14 12:47:37.328921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.806 [2024-12-14 12:47:37.329023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.806 [2024-12-14 12:47:37.329044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.806 [2024-12-14 12:47:37.329079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:37.806 [2024-12-14 12:47:37.329090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.806 [2024-12-14 12:47:37.329107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:37.806 [2024-12-14 12:47:37.329123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.806 [2024-12-14 12:47:37.329131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.806 [2024-12-14 12:47:37.329147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.806 [2024-12-14 12:47:37.329157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:37.806 [2024-12-14 12:47:37.329166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.806 [2024-12-14 12:47:37.329181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.806 [2024-12-14 12:47:37.329188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:37.806 [2024-12-14 12:47:37.329195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.806 [2024-12-14 12:47:37.329213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:37.806 [2024-12-14 12:47:37.329220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.806 [2024-12-14 12:47:37.329235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.806 [2024-12-14 12:47:37.329248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.806 [2024-12-14 12:47:37.329255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:37.806 [2024-12-14 12:47:37.329261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.806 [2024-12-14 12:47:37.329268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.806 [2024-12-14 12:47:37.329275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:37.807 [2024-12-14 12:47:37.329282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.807 [2024-12-14 12:47:37.329288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:37.807 [2024-12-14 12:47:37.329295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:37.807 [2024-12-14 12:47:37.329304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.807 [2024-12-14 12:47:37.329311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.807 [2024-12-14 12:47:37.329317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:37.807 [2024-12-14 12:47:37.329323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.807 [2024-12-14 12:47:37.329329] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:37.807 [2024-12-14 12:47:37.329336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:37.807 [2024-12-14 12:47:37.329342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.807 [2024-12-14 12:47:37.329349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.807 [2024-12-14 12:47:37.329358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:37.807 [2024-12-14 12:47:37.329366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.807 [2024-12-14 12:47:37.329372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.807 [2024-12-14 12:47:37.329378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:37.807 [2024-12-14 12:47:37.329385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.807 [2024-12-14 12:47:37.329391] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.807 [2024-12-14 12:47:37.329400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.807 [2024-12-14 12:47:37.329408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.807 [2024-12-14 12:47:37.329416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.807 [2024-12-14 12:47:37.329424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:37.807 [2024-12-14 12:47:37.329432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.807 [2024-12-14 12:47:37.329439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.807 [2024-12-14 12:47:37.329446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.807 [2024-12-14 12:47:37.329452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.807 [2024-12-14 12:47:37.329459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.807 [2024-12-14 12:47:37.329468] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.807 [2024-12-14 12:47:37.329477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:37.807 [2024-12-14 12:47:37.329498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:37.807 [2024-12-14 12:47:37.329506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:37.807 [2024-12-14 12:47:37.329513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:37.807 [2024-12-14 12:47:37.329521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:37.807 [2024-12-14 12:47:37.329528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:37.807 [2024-12-14 12:47:37.329535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:37.807 [2024-12-14 12:47:37.329542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:37.807 [2024-12-14 12:47:37.329549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:37.807 [2024-12-14 12:47:37.329556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:37.807 [2024-12-14 12:47:37.329607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.807 [2024-12-14 12:47:37.329615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.807 [2024-12-14 12:47:37.329630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.807 [2024-12-14 12:47:37.329638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.807 [2024-12-14 12:47:37.329646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.807 [2024-12-14 12:47:37.329653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.329664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.807 [2024-12-14 12:47:37.329673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:21:37.807 [2024-12-14 12:47:37.329681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.361159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.361208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:37.807 [2024-12-14 12:47:37.361220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.432 ms 00:21:37.807 [2024-12-14 12:47:37.361233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.361324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.361334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:37.807 [2024-12-14 12:47:37.361343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.066 ms 00:21:37.807 [2024-12-14 12:47:37.361351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.407330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.407385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:37.807 [2024-12-14 12:47:37.407399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.920 ms 00:21:37.807 [2024-12-14 12:47:37.407408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.407455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.407466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:37.807 [2024-12-14 12:47:37.407480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:37.807 [2024-12-14 12:47:37.407488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.408129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.408163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:37.807 [2024-12-14 12:47:37.408175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:21:37.807 [2024-12-14 12:47:37.408184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.408348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.408360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:37.807 [2024-12-14 12:47:37.408371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:37.807 [2024-12-14 12:47:37.408380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.423963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.424013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:37.807 [2024-12-14 12:47:37.424025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.565 ms 00:21:37.807 [2024-12-14 12:47:37.424033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.438076] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:37.807 [2024-12-14 12:47:37.438125] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:37.807 [2024-12-14 12:47:37.438139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.438148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:37.807 [2024-12-14 12:47:37.438157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.975 ms 00:21:37.807 [2024-12-14 12:47:37.438165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.463808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.463862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:37.807 [2024-12-14 12:47:37.463874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.591 ms 00:21:37.807 [2024-12-14 12:47:37.463883] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.807 [2024-12-14 12:47:37.476695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.807 [2024-12-14 12:47:37.476742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:37.808 [2024-12-14 12:47:37.476754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:21:37.808 [2024-12-14 12:47:37.476762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.808 [2024-12-14 12:47:37.489158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.808 [2024-12-14 12:47:37.489203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:37.808 [2024-12-14 12:47:37.489216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.350 ms 00:21:37.808 [2024-12-14 12:47:37.489224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.808 [2024-12-14 12:47:37.489868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.808 [2024-12-14 12:47:37.489908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:37.808 [2024-12-14 12:47:37.489918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:21:37.808 [2024-12-14 12:47:37.489930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.553163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.553223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:38.070 [2024-12-14 12:47:37.553238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.212 ms 00:21:38.070 [2024-12-14 12:47:37.553254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.564332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:38.070 [2024-12-14 12:47:37.567215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.567252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:38.070 [2024-12-14 12:47:37.567264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.904 ms 00:21:38.070 [2024-12-14 12:47:37.567275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.567365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.567376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:38.070 [2024-12-14 12:47:37.567387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:21:38.070 [2024-12-14 12:47:37.567396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.567469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.567482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:38.070 [2024-12-14 12:47:37.567492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:21:38.070 [2024-12-14 12:47:37.567500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.567520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.567529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:38.070 [2024-12-14 12:47:37.567538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:38.070 [2024-12-14 12:47:37.567546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.567583] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:38.070 [2024-12-14 12:47:37.567598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.567606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:38.070 [2024-12-14 12:47:37.567614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:38.070 [2024-12-14 12:47:37.567622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.593149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.593194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:38.070 [2024-12-14 12:47:37.593207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.510 ms 00:21:38.070 [2024-12-14 12:47:37.593223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.593310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:38.070 [2024-12-14 12:47:37.593322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:38.070 [2024-12-14 12:47:37.593333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:38.070 [2024-12-14 12:47:37.593341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:38.070 [2024-12-14 12:47:37.594648] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.160 ms, result 0 00:21:39.017  [2024-12-14T12:47:39.699Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-14T12:47:40.643Z] Copying: 41/1024 [MB] (30 MBps) [2024-12-14T12:47:42.061Z] Copying: 57/1024 [MB] (15 MBps) [2024-12-14T12:47:42.665Z] Copying: 72/1024 [MB] (15 MBps) [2024-12-14T12:47:43.621Z] Copying: 83/1024 [MB] (10 MBps) [2024-12-14T12:47:45.009Z] Copying: 102/1024 [MB] (18 MBps) [2024-12-14T12:47:45.953Z] Copying: 121/1024 [MB] (19 MBps) [2024-12-14T12:47:46.898Z] Copying: 139/1024 [MB] (18 MBps) [2024-12-14T12:47:47.841Z] Copying: 150/1024 [MB] (10 MBps) [2024-12-14T12:47:48.787Z] Copying: 163/1024 [MB] (13 MBps) [2024-12-14T12:47:49.731Z] Copying: 175/1024 [MB] (11 MBps) [2024-12-14T12:47:50.674Z] Copying: 185/1024 [MB] (10 MBps) [2024-12-14T12:47:51.619Z] Copying: 212/1024 [MB] (26 MBps) [2024-12-14T12:47:53.006Z] Copying: 224/1024 [MB] (11 MBps) [2024-12-14T12:47:53.951Z] Copying: 235/1024 [MB] (11 MBps) [2024-12-14T12:47:54.896Z] Copying: 280/1024 [MB] (44 MBps) [2024-12-14T12:47:55.838Z] Copying: 298/1024 [MB] (17 MBps) [2024-12-14T12:47:56.782Z] Copying: 314/1024 [MB] (15 MBps) [2024-12-14T12:47:57.727Z] Copying: 332/1024 [MB] (18 MBps) [2024-12-14T12:47:58.672Z] Copying: 354/1024 [MB] (22 MBps) [2024-12-14T12:47:59.616Z] Copying: 370/1024 [MB] (15 MBps) [2024-12-14T12:48:01.009Z] Copying: 380/1024 [MB] (10 MBps) [2024-12-14T12:48:01.952Z] Copying: 398/1024 [MB] (18 MBps) [2024-12-14T12:48:02.896Z] Copying: 425/1024 [MB] (26 MBps) [2024-12-14T12:48:03.839Z] Copying: 474/1024 [MB] (48 MBps) [2024-12-14T12:48:04.784Z] Copying: 498/1024 [MB] (24 MBps) [2024-12-14T12:48:05.726Z] Copying: 513/1024 [MB] (14 MBps) 
[2024-12-14T12:48:06.669Z] Copying: 533/1024 [MB] (20 MBps) [2024-12-14T12:48:07.613Z] Copying: 573/1024 [MB] (39 MBps) [2024-12-14T12:48:09.002Z] Copying: 622/1024 [MB] (49 MBps) [2024-12-14T12:48:09.948Z] Copying: 640/1024 [MB] (18 MBps) [2024-12-14T12:48:10.890Z] Copying: 661/1024 [MB] (20 MBps) [2024-12-14T12:48:11.911Z] Copying: 678/1024 [MB] (17 MBps) [2024-12-14T12:48:12.856Z] Copying: 693/1024 [MB] (14 MBps) [2024-12-14T12:48:13.804Z] Copying: 711/1024 [MB] (18 MBps) [2024-12-14T12:48:14.748Z] Copying: 724/1024 [MB] (12 MBps) [2024-12-14T12:48:15.698Z] Copying: 740/1024 [MB] (16 MBps) [2024-12-14T12:48:16.646Z] Copying: 753/1024 [MB] (12 MBps) [2024-12-14T12:48:18.032Z] Copying: 764/1024 [MB] (10 MBps) [2024-12-14T12:48:18.978Z] Copying: 774/1024 [MB] (10 MBps) [2024-12-14T12:48:19.923Z] Copying: 784/1024 [MB] (10 MBps) [2024-12-14T12:48:20.867Z] Copying: 796/1024 [MB] (11 MBps) [2024-12-14T12:48:21.812Z] Copying: 806/1024 [MB] (10 MBps) [2024-12-14T12:48:22.757Z] Copying: 818/1024 [MB] (11 MBps) [2024-12-14T12:48:23.702Z] Copying: 829/1024 [MB] (11 MBps) [2024-12-14T12:48:24.646Z] Copying: 842/1024 [MB] (13 MBps) [2024-12-14T12:48:25.644Z] Copying: 853/1024 [MB] (10 MBps) [2024-12-14T12:48:27.034Z] Copying: 863/1024 [MB] (10 MBps) [2024-12-14T12:48:27.607Z] Copying: 874/1024 [MB] (10 MBps) [2024-12-14T12:48:28.996Z] Copying: 884/1024 [MB] (10 MBps) [2024-12-14T12:48:29.954Z] Copying: 895/1024 [MB] (10 MBps) [2024-12-14T12:48:30.900Z] Copying: 912/1024 [MB] (17 MBps) [2024-12-14T12:48:31.843Z] Copying: 942/1024 [MB] (29 MBps) [2024-12-14T12:48:32.787Z] Copying: 953/1024 [MB] (11 MBps) [2024-12-14T12:48:33.730Z] Copying: 973/1024 [MB] (19 MBps) [2024-12-14T12:48:34.675Z] Copying: 985/1024 [MB] (11 MBps) [2024-12-14T12:48:35.622Z] Copying: 998/1024 [MB] (13 MBps) [2024-12-14T12:48:36.566Z] Copying: 1010/1024 [MB] (12 MBps) [2024-12-14T12:48:36.566Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-14 12:48:36.451780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.829 [2024-12-14 12:48:36.451835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:36.829 [2024-12-14 12:48:36.451850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:36.829 [2024-12-14 12:48:36.451859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.451881] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:36.830 [2024-12-14 12:48:36.454923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.454964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:36.830 [2024-12-14 12:48:36.454975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms 00:22:36.830 [2024-12-14 12:48:36.454991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.457252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.457299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:36.830 [2024-12-14 12:48:36.457311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms 00:22:36.830 [2024-12-14 12:48:36.457319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.474935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 
12:48:36.474988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:36.830 [2024-12-14 12:48:36.475000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.599 ms 00:22:36.830 [2024-12-14 12:48:36.475008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.481107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.481147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:36.830 [2024-12-14 12:48:36.481159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.053 ms 00:22:36.830 [2024-12-14 12:48:36.481167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.507022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.507078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:36.830 [2024-12-14 12:48:36.507089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.805 ms 00:22:36.830 [2024-12-14 12:48:36.507097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.523456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.523504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:36.830 [2024-12-14 12:48:36.523517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.314 ms 00:22:36.830 [2024-12-14 12:48:36.523525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.523686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.523703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:36.830 [2024-12-14 12:48:36.523713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:22:36.830 [2024-12-14 12:48:36.523722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:36.830 [2024-12-14 12:48:36.549099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:36.830 [2024-12-14 12:48:36.549144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:36.830 [2024-12-14 12:48:36.549156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.361 ms 00:22:36.830 [2024-12-14 12:48:36.549164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.093 [2024-12-14 12:48:36.573957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.093 [2024-12-14 12:48:36.574004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:37.093 [2024-12-14 12:48:36.574015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.749 ms 00:22:37.093 [2024-12-14 12:48:36.574023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.093 [2024-12-14 12:48:36.598287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.093 [2024-12-14 12:48:36.598334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:37.093 [2024-12-14 12:48:36.598345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.211 ms 00:22:37.093 [2024-12-14 12:48:36.598353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.093 [2024-12-14 12:48:36.622271] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.093 [2024-12-14 12:48:36.622316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:37.093 [2024-12-14 12:48:36.622327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.847 ms 00:22:37.093 [2024-12-14 12:48:36.622334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.093 [2024-12-14 12:48:36.622377] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:37.093 [2024-12-14 12:48:36.622394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:37.093 [2024-12-14 12:48:36.622572] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 22-95: 0 / 261120 wr_cnt: 0 state: free [74 identical per-band entries condensed] 00:22:37.094 [2024-12-14 12:48:36.623381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96:
0 / 261120 wr_cnt: 0 state: free 00:22:37.094 [2024-12-14 12:48:36.623388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:37.094 [2024-12-14 12:48:36.623398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:37.094 [2024-12-14 12:48:36.623406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:37.094 [2024-12-14 12:48:36.623414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:37.094 [2024-12-14 12:48:36.623430] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:37.094 [2024-12-14 12:48:36.623441] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2 00:22:37.094 [2024-12-14 12:48:36.623450] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:37.094 [2024-12-14 12:48:36.623458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:37.094 [2024-12-14 12:48:36.623465] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:37.094 [2024-12-14 12:48:36.623474] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:37.094 [2024-12-14 12:48:36.623482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:37.094 [2024-12-14 12:48:36.623497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:37.094 [2024-12-14 12:48:36.623505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:37.094 [2024-12-14 12:48:36.623511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:37.094 [2024-12-14 12:48:36.623518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:37.094 [2024-12-14 12:48:36.623525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.094 [2024-12-14 12:48:36.623532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:37.094 [2024-12-14 12:48:36.623543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.149 ms 00:22:37.094 [2024-12-14 12:48:36.623551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.094 [2024-12-14 12:48:36.636921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.094 [2024-12-14 12:48:36.636965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:37.094 [2024-12-14 12:48:36.636976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.348 ms 00:22:37.094 [2024-12-14 12:48:36.636984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.637407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.095 [2024-12-14 12:48:36.637435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:37.095 [2024-12-14 12:48:36.637444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:22:37.095 [2024-12-14 12:48:36.637458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.673379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.673429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:37.095 [2024-12-14 12:48:36.673442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.673453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.673520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.673530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:37.095 [2024-12-14 12:48:36.673562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.673576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.673647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.673660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:37.095 [2024-12-14 12:48:36.673671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.673681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.673698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.673707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:37.095 [2024-12-14 12:48:36.673716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.673725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.757024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.757101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:37.095 [2024-12-14 12:48:36.757114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.757123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.825661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.825716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:37.095 [2024-12-14 12:48:36.825728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.825744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.825819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.825830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:37.095 [2024-12-14 12:48:36.825838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.825847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.825886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.825896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:37.095 [2024-12-14 12:48:36.825905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.825913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.826010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.826024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:37.095 
[2024-12-14 12:48:36.826033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.826041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.826100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.826111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:37.095 [2024-12-14 12:48:36.826121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.826130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.826170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.826183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:37.095 [2024-12-14 12:48:36.826192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.826199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.826245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:37.095 [2024-12-14 12:48:36.826256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:37.095 [2024-12-14 12:48:36.826266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:37.095 [2024-12-14 12:48:36.826276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.095 [2024-12-14 12:48:36.826408] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 374.599 ms, result 0 00:22:38.037 00:22:38.037 00:22:38.037 12:48:37 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:38.037 [2024-12-14 12:48:37.736896] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:22:38.037 [2024-12-14 12:48:37.737045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79898 ] 00:22:38.298 [2024-12-14 12:48:37.900994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.298 [2024-12-14 12:48:38.016298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.872 [2024-12-14 12:48:38.308485] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:38.872 [2024-12-14 12:48:38.308571] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:38.872 [2024-12-14 12:48:38.469801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.469866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:38.872 [2024-12-14 12:48:38.469882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:38.872 [2024-12-14 12:48:38.469890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.469950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.469964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:38.872 [2024-12-14 12:48:38.469973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:38.872 [2024-12-14 12:48:38.469981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.470002] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:38.872 [2024-12-14 12:48:38.470769] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:38.872 [2024-12-14 12:48:38.470799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.470808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:38.872 [2024-12-14 12:48:38.470817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:22:38.872 [2024-12-14 12:48:38.470826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.472514] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:38.872 [2024-12-14 12:48:38.486474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.486525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:38.872 [2024-12-14 12:48:38.486539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.962 ms 00:22:38.872 [2024-12-14 12:48:38.486548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.486628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.486639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:38.872 [2024-12-14 12:48:38.486649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:38.872 [2024-12-14 12:48:38.486656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.494550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:38.872 [2024-12-14 12:48:38.494596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:38.872 [2024-12-14 12:48:38.494607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.817 ms 00:22:38.872 [2024-12-14 12:48:38.494621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.494697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.494707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:38.872 [2024-12-14 12:48:38.494716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:38.872 [2024-12-14 12:48:38.494725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.494766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.494778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:38.872 [2024-12-14 12:48:38.494787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:38.872 [2024-12-14 12:48:38.494796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.494823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:38.872 [2024-12-14 12:48:38.499016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.499065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:38.872 [2024-12-14 12:48:38.499079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.199 ms 00:22:38.872 [2024-12-14 12:48:38.499088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.872 [2024-12-14 12:48:38.499127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.872 [2024-12-14 12:48:38.499137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:38.873 [2024-12-14 12:48:38.499146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:38.873 [2024-12-14 12:48:38.499154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.873 [2024-12-14 12:48:38.499206] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:38.873 [2024-12-14 12:48:38.499232] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:38.873 [2024-12-14 12:48:38.499270] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:38.873 [2024-12-14 12:48:38.499290] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:38.873 [2024-12-14 12:48:38.499396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:38.873 [2024-12-14 12:48:38.499409] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:38.873 [2024-12-14 12:48:38.499420] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:38.873 [2024-12-14 12:48:38.499431] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499442] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499454] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:38.873 [2024-12-14 12:48:38.499462] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:38.873 [2024-12-14 12:48:38.499469] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:38.873 [2024-12-14 12:48:38.499481] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:38.873 [2024-12-14 12:48:38.499490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.873 [2024-12-14 12:48:38.499499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:38.873 [2024-12-14 12:48:38.499508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:22:38.873 [2024-12-14 12:48:38.499516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.873 [2024-12-14 12:48:38.499599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.873 [2024-12-14 12:48:38.499611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:38.873 [2024-12-14 12:48:38.499619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:38.873 [2024-12-14 12:48:38.499628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.873 [2024-12-14 12:48:38.499728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:38.873 [2024-12-14 12:48:38.499750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:38.873 [2024-12-14 12:48:38.499760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:38.873 [2024-12-14 12:48:38.499785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:38.873 [2024-12-14 12:48:38.499808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.873 [2024-12-14 12:48:38.499823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:38.873 [2024-12-14 12:48:38.499831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:38.873 [2024-12-14 12:48:38.499838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:38.873 [2024-12-14 12:48:38.499853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:38.873 [2024-12-14 12:48:38.499862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:38.873 [2024-12-14 12:48:38.499869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:38.873 [2024-12-14 12:48:38.499885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499891] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:38.873 [2024-12-14 12:48:38.499905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:38.873 [2024-12-14 12:48:38.499927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:38.873 [2024-12-14 12:48:38.499951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:38.873 [2024-12-14 12:48:38.499973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:38.873 [2024-12-14 12:48:38.499986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:38.873 [2024-12-14 12:48:38.499993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:38.873 [2024-12-14 12:48:38.499999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.873 [2024-12-14 12:48:38.500005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:38.873 [2024-12-14 12:48:38.500013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:38.873 [2024-12-14 12:48:38.500021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:38.873 [2024-12-14 12:48:38.500027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:38.873 [2024-12-14 12:48:38.500034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:38.873 [2024-12-14 12:48:38.500040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.500046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:38.873 [2024-12-14 12:48:38.500053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:38.873 [2024-12-14 12:48:38.500074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.500081] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:38.873 [2024-12-14 12:48:38.500089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:38.873 [2024-12-14 12:48:38.500097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:38.873 [2024-12-14 12:48:38.500106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:38.873 [2024-12-14 12:48:38.500114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:38.873 [2024-12-14 12:48:38.500121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:38.873 [2024-12-14 12:48:38.500128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:38.873 
[2024-12-14 12:48:38.500134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:38.873 [2024-12-14 12:48:38.500141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:38.873 [2024-12-14 12:48:38.500149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:38.873 [2024-12-14 12:48:38.500159] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:38.873 [2024-12-14 12:48:38.500168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.873 [2024-12-14 12:48:38.500179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:38.873 [2024-12-14 12:48:38.500186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:38.873 [2024-12-14 12:48:38.500194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:38.873 [2024-12-14 12:48:38.500205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:38.873 [2024-12-14 12:48:38.500213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:38.873 [2024-12-14 12:48:38.500220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:38.873 [2024-12-14 12:48:38.500228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:38.873 [2024-12-14 12:48:38.500235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:38.873 [2024-12-14 12:48:38.500242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:38.873 [2024-12-14 12:48:38.500251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:38.873 [2024-12-14 12:48:38.500260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:38.873 [2024-12-14 12:48:38.500268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:38.873 [2024-12-14 12:48:38.500275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:38.874 [2024-12-14 12:48:38.500283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:38.874 [2024-12-14 12:48:38.500290] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:38.874 [2024-12-14 12:48:38.500305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:38.874 [2024-12-14 12:48:38.500315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:38.874 [2024-12-14 12:48:38.500323] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:38.874 [2024-12-14 12:48:38.500331] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:38.874 [2024-12-14 12:48:38.500338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:38.874 [2024-12-14 12:48:38.500346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.500354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:38.874 [2024-12-14 12:48:38.500362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:22:38.874 [2024-12-14 12:48:38.500371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.531751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.531804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:38.874 [2024-12-14 12:48:38.531817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.332 ms 00:22:38.874 [2024-12-14 12:48:38.531830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.531923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.531932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:38.874 [2024-12-14 12:48:38.531941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:38.874 [2024-12-14 12:48:38.531948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.574795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.574848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:38.874 [2024-12-14 12:48:38.574862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.787 ms 00:22:38.874 [2024-12-14 12:48:38.574871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.574921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.574932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:38.874 [2024-12-14 12:48:38.574946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:38.874 [2024-12-14 12:48:38.574954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.575572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.575609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:38.874 [2024-12-14 12:48:38.575620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:22:38.874 [2024-12-14 12:48:38.575628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.575792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.575804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:38.874 [2024-12-14 12:48:38.575817] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:22:38.874 [2024-12-14 12:48:38.575826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.591415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.591462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:38.874 [2024-12-14 12:48:38.591473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.569 ms 00:22:38.874 [2024-12-14 12:48:38.591481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.874 [2024-12-14 12:48:38.605475] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:38.874 [2024-12-14 12:48:38.605524] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:38.874 [2024-12-14 12:48:38.605548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.874 [2024-12-14 12:48:38.605558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:38.874 [2024-12-14 12:48:38.605568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.958 ms 00:22:38.874 [2024-12-14 12:48:38.605576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.630987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.631036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:39.136 [2024-12-14 12:48:38.631048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.358 ms 00:22:39.136 [2024-12-14 12:48:38.631065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.643784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.643828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:39.136 [2024-12-14 12:48:38.643840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.657 ms 00:22:39.136 [2024-12-14 12:48:38.643848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.656482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.656528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:39.136 [2024-12-14 12:48:38.656540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.588 ms 00:22:39.136 [2024-12-14 12:48:38.656548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.657214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.657247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:39.136 [2024-12-14 12:48:38.657261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:22:39.136 [2024-12-14 12:48:38.657270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.721052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.721129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:39.136 [2024-12-14 12:48:38.721151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.762 ms 00:22:39.136 [2024-12-14 12:48:38.721160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.732093] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:39.136 [2024-12-14 12:48:38.734855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.734899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:39.136 [2024-12-14 12:48:38.734911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.640 ms 00:22:39.136 [2024-12-14 12:48:38.734919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.735003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.735014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:39.136 [2024-12-14 12:48:38.735024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:39.136 [2024-12-14 12:48:38.735037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.735127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.735142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:39.136 [2024-12-14 12:48:38.735151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:39.136 [2024-12-14 12:48:38.735160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.735183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.735191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:39.136 [2024-12-14 12:48:38.735200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:39.136 [2024-12-14 12:48:38.735208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.735244] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:39.136 [2024-12-14 12:48:38.735255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.735264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:39.136 [2024-12-14 12:48:38.735273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:39.136 [2024-12-14 12:48:38.735281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.760788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.760838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:39.136 [2024-12-14 12:48:38.760856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms 00:22:39.136 [2024-12-14 12:48:38.760864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:39.136 [2024-12-14 12:48:38.760944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:39.136 [2024-12-14 12:48:38.760954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:39.136 [2024-12-14 12:48:38.760964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:39.136 [2024-12-14 12:48:38.760973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:39.136 [2024-12-14 12:48:38.762242] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.923 ms, result 0 00:22:40.602 [2024-12-14T12:48:41.282Z] Copying: 15/1024 [MB] (15 MBps) [... intermediate progress updates condensed ...] [2024-12-14T12:49:45.288Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-14 12:49:45.162333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.162411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:45.551 [2024-12-14 12:49:45.162430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:45.551 [2024-12-14 12:49:45.162440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.162466] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:45.551 [2024-12-14 12:49:45.165863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.165919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:45.551 [2024-12-14 12:49:45.165932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:23:45.551 [2024-12-14 12:49:45.165940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.166186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.166200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:45.551 [2024-12-14 12:49:45.166210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:23:45.551 [2024-12-14 12:49:45.166219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.169680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.169705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:45.551 [2024-12-14 12:49:45.169714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:23:45.551 [2024-12-14 12:49:45.169729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.175844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.175887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:45.551 [2024-12-14 12:49:45.175899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms 00:23:45.551 [2024-12-14 12:49:45.175909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.205537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.205614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:45.551 [2024-12-14 12:49:45.205632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.563 ms 00:23:45.551 [2024-12-14 12:49:45.205640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.222784]
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.222839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:45.551 [2024-12-14 12:49:45.222853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.035 ms 00:23:45.551 [2024-12-14 12:49:45.222863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.223036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.223051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:45.551 [2024-12-14 12:49:45.223089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:23:45.551 [2024-12-14 12:49:45.223098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.250332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.250382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:45.551 [2024-12-14 12:49:45.250395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.216 ms 00:23:45.551 [2024-12-14 12:49:45.250402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.551 [2024-12-14 12:49:45.276695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.551 [2024-12-14 12:49:45.276744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:45.551 [2024-12-14 12:49:45.276757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.241 ms 00:23:45.551 [2024-12-14 12:49:45.276765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.812 [2024-12-14 12:49:45.302120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.812 [2024-12-14 12:49:45.302168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:45.812 [2024-12-14 12:49:45.302180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.305 ms 00:23:45.812 [2024-12-14 12:49:45.302188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.812 [2024-12-14 12:49:45.327828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.812 [2024-12-14 12:49:45.327877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:45.812 [2024-12-14 12:49:45.327890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.544 ms 00:23:45.812 [2024-12-14 12:49:45.327898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.812 [2024-12-14 12:49:45.327947] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:45.812 [2024-12-14 12:49:45.327974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.327988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.327997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328023] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:45.812 [2024-12-14 12:49:45.328178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 
12:49:45.328260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:23:45.813 [2024-12-14 12:49:45.328472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:45.813 [2024-12-14 12:49:45.328852] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:45.813 [2024-12-14 12:49:45.328860] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2 00:23:45.813 [2024-12-14 12:49:45.328868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:45.813 [2024-12-14 12:49:45.328876] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:45.813 [2024-12-14 12:49:45.328883] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:45.813 [2024-12-14 12:49:45.328892] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:45.813 [2024-12-14 12:49:45.328907] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:45.813 [2024-12-14 12:49:45.328917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:45.813 [2024-12-14 12:49:45.328925] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:45.813 [2024-12-14 12:49:45.328931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:45.813 [2024-12-14 12:49:45.328939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:45.813 [2024-12-14 12:49:45.328947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.813 [2024-12-14 12:49:45.328957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:45.813 [2024-12-14 12:49:45.328967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:23:45.814 [2024-12-14 12:49:45.328980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.343051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.814 [2024-12-14 12:49:45.343107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:45.814 [2024-12-14 12:49:45.343119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.050 ms 00:23:45.814 [2024-12-14 12:49:45.343128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.343540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.814 [2024-12-14 12:49:45.343560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:45.814 [2024-12-14 12:49:45.343578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:23:45.814 [2024-12-14 12:49:45.343586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.380440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.380495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:45.814 [2024-12-14 12:49:45.380508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.380518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.380597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.380608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:45.814 [2024-12-14 12:49:45.380623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.380633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.380730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.380743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:45.814 [2024-12-14 12:49:45.380755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.380765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.380784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.380793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:23:45.814 [2024-12-14 12:49:45.380804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.380817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.467525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.467583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:45.814 [2024-12-14 12:49:45.467602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.467610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:45.814 [2024-12-14 12:49:45.538380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.814 [2024-12-14 12:49:45.538475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.814 [2024-12-14 12:49:45.538564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.814 [2024-12-14 12:49:45.538697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:45.814 [2024-12-14 12:49:45.538757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.814 [2024-12-14 12:49:45.538835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.538896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.814 [2024-12-14 12:49:45.538915] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.814 [2024-12-14 12:49:45.538927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.814 [2024-12-14 12:49:45.538936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.814 [2024-12-14 12:49:45.539109] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.720 ms, result 0 00:23:46.758 00:23:46.758 00:23:46.758 12:49:46 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:49.307 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:49.307 12:49:48 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:49.307 [2024-12-14 12:49:48.681119] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:23:49.308 [2024-12-14 12:49:48.681259] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80622 ] 00:23:49.308 [2024-12-14 12:49:48.846296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.308 [2024-12-14 12:49:48.963176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.569 [2024-12-14 12:49:49.258504] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:49.569 [2024-12-14 12:49:49.258590] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:49.831 [2024-12-14 12:49:49.417369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.417415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:49.831 [2024-12-14 12:49:49.417428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:49.831 [2024-12-14 12:49:49.417436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.417481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.417493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:49.831 [2024-12-14 12:49:49.417501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:49.831 [2024-12-14 12:49:49.417526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.417543] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:49.831 [2024-12-14 12:49:49.418206] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:49.831 [2024-12-14 12:49:49.418231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.418239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:49.831 [2024-12-14 12:49:49.418249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:23:49.831 [2024-12-14 12:49:49.418256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.419319] mngt/ftl_mngt_md.c: 
455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:49.831 [2024-12-14 12:49:49.431726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.431769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:49.831 [2024-12-14 12:49:49.431781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.409 ms 00:23:49.831 [2024-12-14 12:49:49.431788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.431844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.431853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:49.831 [2024-12-14 12:49:49.431862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:49.831 [2024-12-14 12:49:49.431868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.436707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.436734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:49.831 [2024-12-14 12:49:49.436743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.791 ms 00:23:49.831 [2024-12-14 12:49:49.436754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.436819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.831 [2024-12-14 12:49:49.436827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:49.831 [2024-12-14 12:49:49.436835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:49.831 [2024-12-14 12:49:49.436842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.831 [2024-12-14 12:49:49.436890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.832 [2024-12-14 12:49:49.436901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:49.832 [2024-12-14 12:49:49.436908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:49.832 [2024-12-14 12:49:49.436916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.832 [2024-12-14 12:49:49.436938] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:49.832 [2024-12-14 12:49:49.440406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.832 [2024-12-14 12:49:49.440432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:49.832 [2024-12-14 12:49:49.440444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.472 ms 00:23:49.832 [2024-12-14 12:49:49.440451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.832 [2024-12-14 12:49:49.440480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.832 [2024-12-14 12:49:49.440488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:49.832 [2024-12-14 12:49:49.440495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:49.832 [2024-12-14 12:49:49.440502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.832 [2024-12-14 12:49:49.440521] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:49.832 [2024-12-14 12:49:49.440540] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:49.832 [2024-12-14 12:49:49.440573] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:49.832 [2024-12-14 12:49:49.440590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:49.832 [2024-12-14 12:49:49.440691] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:49.832 [2024-12-14 12:49:49.440702] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:49.832 [2024-12-14 12:49:49.440712] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:49.832 [2024-12-14 12:49:49.440722] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:49.832 [2024-12-14 12:49:49.440730] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:49.832 [2024-12-14 12:49:49.440737] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:49.832 [2024-12-14 12:49:49.440744] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:49.832 [2024-12-14 12:49:49.440751] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:49.832 [2024-12-14 12:49:49.440762] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:49.832 [2024-12-14 12:49:49.440770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.832 [2024-12-14 12:49:49.440777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:49.832 [2024-12-14 12:49:49.440785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:23:49.832 [2024-12-14 12:49:49.440792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.832 [2024-12-14 12:49:49.440873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.832 [2024-12-14 12:49:49.440882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:49.832 [2024-12-14 12:49:49.440890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:49.832 [2024-12-14 12:49:49.440896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.832 [2024-12-14 12:49:49.441003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:49.832 [2024-12-14 12:49:49.441020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:49.832 [2024-12-14 12:49:49.441028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:49.832 [2024-12-14 12:49:49.441050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:49.832 [2024-12-14 12:49:49.441083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:23:49.832 [2024-12-14 12:49:49.441091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:49.832 [2024-12-14 12:49:49.441098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:49.832 [2024-12-14 12:49:49.441104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:49.832 [2024-12-14 12:49:49.441111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:49.832 [2024-12-14 12:49:49.441125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:49.832 [2024-12-14 12:49:49.441134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:49.832 [2024-12-14 12:49:49.441142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:49.832 [2024-12-14 12:49:49.441155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:49.832 [2024-12-14 12:49:49.441175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:49.832 [2024-12-14 12:49:49.441195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:49.832 [2024-12-14 12:49:49.441215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:49.832 [2024-12-14 12:49:49.441235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:49.832 [2024-12-14 12:49:49.441255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441261] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:49.832 [2024-12-14 12:49:49.441268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:49.832 [2024-12-14 12:49:49.441275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:49.832 [2024-12-14 12:49:49.441281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:49.832 [2024-12-14 12:49:49.441288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:49.832 [2024-12-14 12:49:49.441294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:49.832 [2024-12-14 12:49:49.441300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441307] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:49.832 [2024-12-14 12:49:49.441314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:49.832 [2024-12-14 12:49:49.441320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441326] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:49.832 [2024-12-14 12:49:49.441333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:49.832 [2024-12-14 12:49:49.441340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.832 [2024-12-14 12:49:49.441355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:49.832 [2024-12-14 12:49:49.441362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:49.832 [2024-12-14 12:49:49.441368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:49.832 [2024-12-14 12:49:49.441374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:49.832 [2024-12-14 12:49:49.441380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:49.832 [2024-12-14 12:49:49.441387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:49.832 [2024-12-14 12:49:49.441395] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:49.832 [2024-12-14 12:49:49.441404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:49.832 [2024-12-14 12:49:49.441414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:49.832 [2024-12-14 12:49:49.441421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:49.832 [2024-12-14 12:49:49.441428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:49.832 [2024-12-14 12:49:49.441436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:49.832 [2024-12-14 12:49:49.441444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:49.832 [2024-12-14 12:49:49.441451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:49.832 [2024-12-14 12:49:49.441458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:49.832 [2024-12-14 12:49:49.441464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:49.832 [2024-12-14 12:49:49.441471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:49.832 [2024-12-14 12:49:49.441479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:49.832 [2024-12-14 12:49:49.441486] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:49.832 [2024-12-14 12:49:49.441493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:49.833 [2024-12-14 12:49:49.441500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:49.833 [2024-12-14 12:49:49.441526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:49.833 [2024-12-14 12:49:49.441533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:49.833 [2024-12-14 12:49:49.441541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:49.833 [2024-12-14 12:49:49.441549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:49.833 [2024-12-14 12:49:49.441556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:49.833 [2024-12-14 12:49:49.441564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:49.833 [2024-12-14 12:49:49.441571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:49.833 [2024-12-14 12:49:49.441578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.441585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:49.833 [2024-12-14 12:49:49.441593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:23:49.833 [2024-12-14 12:49:49.441600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.467452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.467488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.833 [2024-12-14 12:49:49.467499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.813 ms 00:23:49.833 [2024-12-14 12:49:49.467509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.467588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.467597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:49.833 [2024-12-14 12:49:49.467604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:49.833 [2024-12-14 12:49:49.467611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.507846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.507887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.833 [2024-12-14 12:49:49.507899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.186 ms 00:23:49.833 [2024-12-14 12:49:49.507907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.507946] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.507956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.833 [2024-12-14 12:49:49.507967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:49.833 [2024-12-14 12:49:49.507975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.508348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.508372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.833 [2024-12-14 12:49:49.508382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:23:49.833 [2024-12-14 12:49:49.508389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.508513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.508523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.833 [2024-12-14 12:49:49.508532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:23:49.833 [2024-12-14 12:49:49.508542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.521747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.521779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.833 [2024-12-14 12:49:49.521791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.187 ms 00:23:49.833 [2024-12-14 12:49:49.521798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.534793] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:49.833 [2024-12-14 12:49:49.534840] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:49.833 [2024-12-14 12:49:49.534851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.534860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:49.833 [2024-12-14 12:49:49.534869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.950 ms 00:23:49.833 [2024-12-14 12:49:49.534876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.833 [2024-12-14 12:49:49.559132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.833 [2024-12-14 12:49:49.559168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:49.833 [2024-12-14 12:49:49.559178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.217 ms 00:23:49.833 [2024-12-14 12:49:49.559186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.094 [2024-12-14 12:49:49.571099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.094 [2024-12-14 12:49:49.571132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:50.094 [2024-12-14 12:49:49.571142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.862 ms 00:23:50.094 [2024-12-14 12:49:49.571150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.094 [2024-12-14 12:49:49.582708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 
12:49:49.582743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:50.095 [2024-12-14 12:49:49.582753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.525 ms 00:23:50.095 [2024-12-14 12:49:49.582759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.583367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.583393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:50.095 [2024-12-14 12:49:49.583405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:23:50.095 [2024-12-14 12:49:49.583413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.640686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.640733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:50.095 [2024-12-14 12:49:49.640751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.255 ms 00:23:50.095 [2024-12-14 12:49:49.640759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.651220] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:50.095 [2024-12-14 12:49:49.653458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.653490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:50.095 [2024-12-14 12:49:49.653502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.658 ms 00:23:50.095 [2024-12-14 12:49:49.653520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.653604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.653617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:50.095 [2024-12-14 12:49:49.653628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:50.095 [2024-12-14 12:49:49.653639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.653703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.653714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:50.095 [2024-12-14 12:49:49.653722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:50.095 [2024-12-14 12:49:49.653729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.653747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.653756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:50.095 [2024-12-14 12:49:49.653764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:50.095 [2024-12-14 12:49:49.653771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.653805] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:50.095 [2024-12-14 12:49:49.653816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.653824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:50.095 
[2024-12-14 12:49:49.653832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:50.095 [2024-12-14 12:49:49.653840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.677418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.677458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:50.095 [2024-12-14 12:49:49.677473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.561 ms 00:23:50.095 [2024-12-14 12:49:49.677481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.677558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.095 [2024-12-14 12:49:49.677568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:50.095 [2024-12-14 12:49:49.677577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:50.095 [2024-12-14 12:49:49.677585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.095 [2024-12-14 12:49:49.678576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.774 ms, result 0 00:23:51.037  [2024-12-14T12:49:51.719Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-14T12:49:53.116Z] Copying: 21/1024 [MB] (10 MBps) [2024-12-14T12:49:54.062Z] Copying: 32/1024 [MB] (10 MBps) [2024-12-14T12:49:55.008Z] Copying: 42/1024 [MB] (10 MBps) [2024-12-14T12:49:55.952Z] Copying: 52/1024 [MB] (10 MBps) [2024-12-14T12:49:56.895Z] Copying: 64024/1048576 [kB] (9984 kBps) [2024-12-14T12:49:57.838Z] Copying: 72/1024 [MB] (10 MBps) [2024-12-14T12:49:58.783Z] Copying: 83/1024 [MB] (10 MBps) [2024-12-14T12:49:59.727Z] Copying: 98/1024 [MB] (15 MBps) [2024-12-14T12:50:01.102Z] Copying: 121/1024 [MB] (23 MBps) [2024-12-14T12:50:02.035Z] Copying: 151/1024 [MB] (29 MBps) [2024-12-14T12:50:02.968Z] Copying: 169/1024 [MB] (17 MBps) [2024-12-14T12:50:03.900Z] Copying: 192/1024 [MB] (23 MBps) [2024-12-14T12:50:04.834Z] Copying: 210/1024 [MB] (18 MBps) [2024-12-14T12:50:05.767Z] Copying: 226/1024 [MB] (15 MBps) [2024-12-14T12:50:06.777Z] Copying: 240/1024 [MB] (13 MBps) [2024-12-14T12:50:07.711Z] Copying: 259/1024 [MB] (19 MBps) [2024-12-14T12:50:09.085Z] Copying: 279/1024 [MB] (19 MBps) [2024-12-14T12:50:10.020Z] Copying: 297/1024 [MB] (17 MBps) [2024-12-14T12:50:10.960Z] Copying: 322/1024 [MB] (25 MBps) [2024-12-14T12:50:11.898Z] Copying: 339/1024 [MB] (16 MBps) [2024-12-14T12:50:12.833Z] Copying: 357/1024 [MB] (17 MBps) [2024-12-14T12:50:13.768Z] Copying: 376/1024 [MB] (18 MBps) [2024-12-14T12:50:14.713Z] Copying: 402/1024 [MB] (26 MBps) [2024-12-14T12:50:16.106Z] Copying: 414/1024 [MB] (12 MBps) [2024-12-14T12:50:17.051Z] Copying: 425/1024 [MB] (10 MBps) [2024-12-14T12:50:17.997Z] Copying: 435/1024 [MB] (10 MBps) [2024-12-14T12:50:18.941Z] Copying: 445/1024 [MB] (10 MBps) [2024-12-14T12:50:19.879Z] Copying: 458/1024 [MB] (12 MBps) [2024-12-14T12:50:20.812Z] Copying: 470/1024 [MB] (12 MBps) [2024-12-14T12:50:21.747Z] Copying: 490/1024 [MB] (20 MBps) [2024-12-14T12:50:23.123Z] Copying: 513/1024 [MB] (22 MBps) [2024-12-14T12:50:24.057Z] Copying: 539/1024 [MB] (25 MBps) [2024-12-14T12:50:24.992Z] Copying: 563/1024 [MB] (23 MBps) [2024-12-14T12:50:25.928Z] Copying: 583/1024 [MB] (20 MBps) [2024-12-14T12:50:26.869Z] Copying: 608/1024 [MB] (25 MBps) [2024-12-14T12:50:27.809Z] Copying: 626/1024 [MB] (18 MBps) [2024-12-14T12:50:28.750Z] 
Copying: 641/1024 [MB] (14 MBps) [2024-12-14T12:50:29.692Z] Copying: 657/1024 [MB] (16 MBps) [2024-12-14T12:50:31.077Z] Copying: 676/1024 [MB] (18 MBps) [2024-12-14T12:50:32.019Z] Copying: 691/1024 [MB] (14 MBps) [2024-12-14T12:50:32.963Z] Copying: 709/1024 [MB] (17 MBps) [2024-12-14T12:50:33.909Z] Copying: 723/1024 [MB] (14 MBps) [2024-12-14T12:50:34.855Z] Copying: 735/1024 [MB] (11 MBps) [2024-12-14T12:50:35.863Z] Copying: 753/1024 [MB] (18 MBps) [2024-12-14T12:50:36.808Z] Copying: 763/1024 [MB] (10 MBps) [2024-12-14T12:50:37.754Z] Copying: 775/1024 [MB] (11 MBps) [2024-12-14T12:50:38.699Z] Copying: 787/1024 [MB] (12 MBps) [2024-12-14T12:50:40.088Z] Copying: 807/1024 [MB] (20 MBps) [2024-12-14T12:50:41.032Z] Copying: 829/1024 [MB] (21 MBps) [2024-12-14T12:50:41.975Z] Copying: 846/1024 [MB] (16 MBps) [2024-12-14T12:50:42.920Z] Copying: 876/1024 [MB] (30 MBps) [2024-12-14T12:50:43.862Z] Copying: 888/1024 [MB] (11 MBps) [2024-12-14T12:50:44.805Z] Copying: 909/1024 [MB] (20 MBps) [2024-12-14T12:50:45.745Z] Copying: 935/1024 [MB] (26 MBps) [2024-12-14T12:50:46.692Z] Copying: 960/1024 [MB] (24 MBps) [2024-12-14T12:50:48.078Z] Copying: 970/1024 [MB] (10 MBps) [2024-12-14T12:50:49.021Z] Copying: 1003/1024 [MB] (33 MBps) [2024-12-14T12:50:49.593Z] Copying: 1023/1024 [MB] (19 MBps) [2024-12-14T12:50:49.593Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-14 12:50:49.366915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.366986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.856 [2024-12-14 12:50:49.367011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:49.856 [2024-12-14 12:50:49.367021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-12-14 12:50:49.367538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.856 [2024-12-14 12:50:49.372784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.372836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.856 [2024-12-14 12:50:49.372849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.211 ms 00:24:49.856 [2024-12-14 12:50:49.372857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-12-14 12:50:49.388618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.388670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.856 [2024-12-14 12:50:49.388684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.945 ms 00:24:49.856 [2024-12-14 12:50:49.388700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-12-14 12:50:49.413735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.413785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.856 [2024-12-14 12:50:49.413799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.017 ms 00:24:49.856 [2024-12-14 12:50:49.413807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-12-14 12:50:49.419941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.419984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.856 [2024-12-14 
12:50:49.419997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.095 ms 00:24:49.856 [2024-12-14 12:50:49.420013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-12-14 12:50:49.446741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.446804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.856 [2024-12-14 12:50:49.446817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.690 ms 00:24:49.856 [2024-12-14 12:50:49.446825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.856 [2024-12-14 12:50:49.462603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.856 [2024-12-14 12:50:49.462653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.856 [2024-12-14 12:50:49.462667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.732 ms 00:24:49.856 [2024-12-14 12:50:49.462676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.119 [2024-12-14 12:50:49.696331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.119 [2024-12-14 12:50:49.696399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:50.119 [2024-12-14 12:50:49.696413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 233.604 ms 00:24:50.119 [2024-12-14 12:50:49.696422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.119 [2024-12-14 12:50:49.722347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.119 [2024-12-14 12:50:49.722394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:50.119 [2024-12-14 12:50:49.722407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.908 ms 00:24:50.119 [2024-12-14 12:50:49.722414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.119 [2024-12-14 12:50:49.747893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.119 [2024-12-14 12:50:49.747942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:50.119 [2024-12-14 12:50:49.747953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.433 ms 00:24:50.119 [2024-12-14 12:50:49.747968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.119 [2024-12-14 12:50:49.772991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.119 [2024-12-14 12:50:49.773040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:50.119 [2024-12-14 12:50:49.773053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.978 ms 00:24:50.119 [2024-12-14 12:50:49.773078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.119 [2024-12-14 12:50:49.797862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.119 [2024-12-14 12:50:49.797910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:50.119 [2024-12-14 12:50:49.797922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.714 ms 00:24:50.119 [2024-12-14 12:50:49.797931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.119 [2024-12-14 12:50:49.797973] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:50.119 [2024-12-14 12:50:49.797989] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108288 / 261120 wr_cnt: 1 state: open 00:24:50.119 [2024-12-14 12:50:49.798000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 
12:50:49.798203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:50.119 [2024-12-14 12:50:49.798343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:24:50.120 [2024-12-14 12:50:49.798399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:50.120 [2024-12-14 12:50:49.798810] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:50.120 [2024-12-14 12:50:49.798819] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2 00:24:50.120 [2024-12-14 12:50:49.798828] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108288 00:24:50.120 [2024-12-14 12:50:49.798836] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109248 00:24:50.120 [2024-12-14 12:50:49.798844] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108288 00:24:50.120 [2024-12-14 12:50:49.798853] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:24:50.120 [2024-12-14 12:50:49.798873] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:50.120 [2024-12-14 12:50:49.798881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:50.120 [2024-12-14 12:50:49.798889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:50.120 [2024-12-14 12:50:49.798896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:50.120 [2024-12-14 12:50:49.798904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:50.120 [2024-12-14 12:50:49.798911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.120 [2024-12-14 12:50:49.798919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:50.120 [2024-12-14 12:50:49.798928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:24:50.120 [2024-12-14 12:50:49.798935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.120 [2024-12-14 12:50:49.812414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.120 [2024-12-14 12:50:49.812460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:50.120 [2024-12-14 12:50:49.812478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.460 ms 00:24:50.120 [2024-12-14 12:50:49.812486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.120 [2024-12-14 12:50:49.812878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:50.120 [2024-12-14 12:50:49.812898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:50.120 [2024-12-14 12:50:49.812907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:24:50.120 [2024-12-14 12:50:49.812915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.120 [2024-12-14 12:50:49.848934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.120 [2024-12-14 12:50:49.848989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:50.120 [2024-12-14 12:50:49.849001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.120 [2024-12-14 12:50:49.849011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.120 [2024-12-14 12:50:49.849094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.120 [2024-12-14 12:50:49.849104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:50.120 [2024-12-14 12:50:49.849113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.120 [2024-12-14 12:50:49.849123] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:50.120 [2024-12-14 12:50:49.849186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.120 [2024-12-14 12:50:49.849198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:50.120 [2024-12-14 12:50:49.849212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.120 [2024-12-14 12:50:49.849221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.120 [2024-12-14 12:50:49.849237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.120 [2024-12-14 12:50:49.849246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:50.120 [2024-12-14 12:50:49.849255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.120 [2024-12-14 12:50:49.849264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:49.934234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:49.934294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:50.382 [2024-12-14 12:50:49.934307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:49.934316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:50.382 [2024-12-14 12:50:50.004150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:50.004158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:50.382 [2024-12-14 12:50:50.004254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:50.004269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:50.382 [2024-12-14 12:50:50.004327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:50.004335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:50.382 [2024-12-14 12:50:50.004455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:50.004467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:50.382 [2024-12-14 12:50:50.004516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
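As a consistency check on the statistics dumped above (a worked computation from the numbers in the dump, not additional instrumentation): write amplification is the ratio of total media writes to user writes,

\[ \mathrm{WAF} = \frac{\text{total writes}}{\text{user writes}} = \frac{109248}{108288} \approx 1.0089 \]

which matches the reported WAF of 1.0089; the extra 109248 - 108288 = 960 blocks are presumably FTL metadata writes, and the single open band holds exactly the 108288 valid LBAs reported, against its 261120-block capacity.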
00:24:50.382 [2024-12-14 12:50:50.004524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:50.382 [2024-12-14 12:50:50.004583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:50.004591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:50.382 [2024-12-14 12:50:50.004653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:50.382 [2024-12-14 12:50:50.004662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:50.382 [2024-12-14 12:50:50.004669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:50.382 [2024-12-14 12:50:50.004802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 638.538 ms, result 0 00:24:51.770 00:24:51.770 00:24:51.770 12:50:51 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:24:51.771 [2024-12-14 12:50:51.440381] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:24:51.771 [2024-12-14 12:50:51.440535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81310 ] 00:24:52.032 [2024-12-14 12:50:51.604618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.032 [2024-12-14 12:50:51.724160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.292 [2024-12-14 12:50:52.019237] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.292 [2024-12-14 12:50:52.019326] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.554 [2024-12-14 12:50:52.181107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.554 [2024-12-14 12:50:52.181172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:52.554 [2024-12-14 12:50:52.181188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:52.554 [2024-12-14 12:50:52.181197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.554 [2024-12-14 12:50:52.181252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.554 [2024-12-14 12:50:52.181265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.554 [2024-12-14 12:50:52.181274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:24:52.554 [2024-12-14 12:50:52.181282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.554 [2024-12-14 12:50:52.181303] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:52.554 [2024-12-14 12:50:52.182146] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:24:52.554 [2024-12-14 12:50:52.182189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.554 [2024-12-14 12:50:52.182197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:52.554 [2024-12-14 12:50:52.182207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.891 ms 00:24:52.554 [2024-12-14 12:50:52.182214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.554 [2024-12-14 12:50:52.183877] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:52.554 [2024-12-14 12:50:52.198203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.554 [2024-12-14 12:50:52.198250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:52.554 [2024-12-14 12:50:52.198262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.328 ms 00:24:52.554 [2024-12-14 12:50:52.198270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.198348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.198358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:52.555 [2024-12-14 12:50:52.198367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:52.555 [2024-12-14 12:50:52.198375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.206171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.206213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.555 [2024-12-14 12:50:52.206224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.719 ms 00:24:52.555 [2024-12-14 12:50:52.206239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.206320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.206329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.555 [2024-12-14 12:50:52.206338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:52.555 [2024-12-14 12:50:52.206346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.206390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.206401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:52.555 [2024-12-14 12:50:52.206410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:52.555 [2024-12-14 12:50:52.206419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.206446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:52.555 [2024-12-14 12:50:52.210573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.210611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.555 [2024-12-14 12:50:52.210624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:24:52.555 [2024-12-14 12:50:52.210632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.210672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:52.555 [2024-12-14 12:50:52.210680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:52.555 [2024-12-14 12:50:52.210689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:52.555 [2024-12-14 12:50:52.210697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.210747] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:52.555 [2024-12-14 12:50:52.210772] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:52.555 [2024-12-14 12:50:52.210808] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:52.555 [2024-12-14 12:50:52.210828] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:52.555 [2024-12-14 12:50:52.210933] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:52.555 [2024-12-14 12:50:52.210944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:52.555 [2024-12-14 12:50:52.210955] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:52.555 [2024-12-14 12:50:52.210965] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:52.555 [2024-12-14 12:50:52.210974] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:52.555 [2024-12-14 12:50:52.210983] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:52.555 [2024-12-14 12:50:52.210991] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:52.555 [2024-12-14 12:50:52.210998] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:52.555 [2024-12-14 12:50:52.211009] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:52.555 [2024-12-14 12:50:52.211017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.211026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:52.555 [2024-12-14 12:50:52.211033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:52.555 [2024-12-14 12:50:52.211040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.211138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.555 [2024-12-14 12:50:52.211148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:52.555 [2024-12-14 12:50:52.211155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:52.555 [2024-12-14 12:50:52.211163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.555 [2024-12-14 12:50:52.211266] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:52.555 [2024-12-14 12:50:52.211278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:52.555 [2024-12-14 12:50:52.211286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211303] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:52.555 [2024-12-14 12:50:52.211309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:52.555 [2024-12-14 12:50:52.211330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.555 [2024-12-14 12:50:52.211343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:52.555 [2024-12-14 12:50:52.211350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:52.555 [2024-12-14 12:50:52.211357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.555 [2024-12-14 12:50:52.211371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:52.555 [2024-12-14 12:50:52.211381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:52.555 [2024-12-14 12:50:52.211388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:52.555 [2024-12-14 12:50:52.211403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:52.555 [2024-12-14 12:50:52.211424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:52.555 [2024-12-14 12:50:52.211447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:52.555 [2024-12-14 12:50:52.211469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:52.555 [2024-12-14 12:50:52.211490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:52.555 [2024-12-14 12:50:52.211511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.555 [2024-12-14 12:50:52.211525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:52.555 [2024-12-14 12:50:52.211531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:52.555 
[2024-12-14 12:50:52.211538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.555 [2024-12-14 12:50:52.211546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:52.555 [2024-12-14 12:50:52.211553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:52.555 [2024-12-14 12:50:52.211559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:52.555 [2024-12-14 12:50:52.211574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:52.555 [2024-12-14 12:50:52.211580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211587] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:52.555 [2024-12-14 12:50:52.211596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:52.555 [2024-12-14 12:50:52.211604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.555 [2024-12-14 12:50:52.211621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:52.555 [2024-12-14 12:50:52.211628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:52.555 [2024-12-14 12:50:52.211635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:52.555 [2024-12-14 12:50:52.211642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:52.555 [2024-12-14 12:50:52.211648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:52.555 [2024-12-14 12:50:52.211655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:52.555 [2024-12-14 12:50:52.211664] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:52.555 [2024-12-14 12:50:52.211673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.555 [2024-12-14 12:50:52.211685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:52.555 [2024-12-14 12:50:52.211692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:52.555 [2024-12-14 12:50:52.211699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:52.555 [2024-12-14 12:50:52.211706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:52.555 [2024-12-14 12:50:52.211714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:52.556 [2024-12-14 12:50:52.211722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:52.556 [2024-12-14 12:50:52.211729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:52.556 [2024-12-14 12:50:52.211736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:52.556 [2024-12-14 12:50:52.211743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:52.556 [2024-12-14 12:50:52.211749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:52.556 [2024-12-14 12:50:52.211757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:52.556 [2024-12-14 12:50:52.211763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:52.556 [2024-12-14 12:50:52.211770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:52.556 [2024-12-14 12:50:52.211778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:52.556 [2024-12-14 12:50:52.211785] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:52.556 [2024-12-14 12:50:52.211793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.556 [2024-12-14 12:50:52.211801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:52.556 [2024-12-14 12:50:52.211808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:52.556 [2024-12-14 12:50:52.211815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:52.556 [2024-12-14 12:50:52.211822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:52.556 [2024-12-14 12:50:52.211829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.556 [2024-12-14 12:50:52.211837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:52.556 [2024-12-14 12:50:52.211845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:24:52.556 [2024-12-14 12:50:52.211855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.556 [2024-12-14 12:50:52.243496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.556 [2024-12-14 12:50:52.243549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.556 [2024-12-14 12:50:52.243561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.597 ms 00:24:52.556 [2024-12-14 12:50:52.243575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.556 [2024-12-14 12:50:52.243661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.556 [2024-12-14 12:50:52.243670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:52.556 [2024-12-14 12:50:52.243679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:52.556 [2024-12-14 12:50:52.243687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.817 [2024-12-14 
12:50:52.292085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.817 [2024-12-14 12:50:52.292144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.817 [2024-12-14 12:50:52.292159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.338 ms 00:24:52.817 [2024-12-14 12:50:52.292168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.817 [2024-12-14 12:50:52.292216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.817 [2024-12-14 12:50:52.292227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.817 [2024-12-14 12:50:52.292240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:52.817 [2024-12-14 12:50:52.292248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.817 [2024-12-14 12:50:52.292840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.817 [2024-12-14 12:50:52.292884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.817 [2024-12-14 12:50:52.292895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:24:52.817 [2024-12-14 12:50:52.292903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.817 [2024-12-14 12:50:52.293079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.817 [2024-12-14 12:50:52.293092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.817 [2024-12-14 12:50:52.293105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:24:52.817 [2024-12-14 12:50:52.293113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.817 [2024-12-14 12:50:52.308958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.817 [2024-12-14 12:50:52.309012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.817 [2024-12-14 12:50:52.309024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.823 ms 00:24:52.817 [2024-12-14 12:50:52.309032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.817 [2024-12-14 12:50:52.323425] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:52.817 [2024-12-14 12:50:52.323474] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:52.817 [2024-12-14 12:50:52.323487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.817 [2024-12-14 12:50:52.323496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:52.818 [2024-12-14 12:50:52.323505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.332 ms 00:24:52.818 [2024-12-14 12:50:52.323513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.349219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.349272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:52.818 [2024-12-14 12:50:52.349285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.654 ms 00:24:52.818 [2024-12-14 12:50:52.349292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.362032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:52.818 [2024-12-14 12:50:52.362088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:52.818 [2024-12-14 12:50:52.362099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.686 ms 00:24:52.818 [2024-12-14 12:50:52.362107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.374472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.374519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:52.818 [2024-12-14 12:50:52.374530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.319 ms 00:24:52.818 [2024-12-14 12:50:52.374537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.375195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.375227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:52.818 [2024-12-14 12:50:52.375241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:24:52.818 [2024-12-14 12:50:52.375248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.439240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.439310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:52.818 [2024-12-14 12:50:52.439332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.970 ms 00:24:52.818 [2024-12-14 12:50:52.439342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.450497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:52.818 [2024-12-14 12:50:52.453295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.453336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:52.818 [2024-12-14 12:50:52.453348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.898 ms 00:24:52.818 [2024-12-14 12:50:52.453356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.453441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.453453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:52.818 [2024-12-14 12:50:52.453463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:52.818 [2024-12-14 12:50:52.453475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.455269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.455312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:52.818 [2024-12-14 12:50:52.455325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.741 ms 00:24:52.818 [2024-12-14 12:50:52.455334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.455369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.455379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:52.818 [2024-12-14 12:50:52.455388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
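Both startup sequences in this ftl_restore run exercise the same restore path (Restore NV cache metadata, Restore P2L checkpoints, and Restore L2P appear in each), so their step timings are directly comparable. A quick worked check against the first pass, using only durations printed earlier in this log:

\[ \frac{63.970\,\text{ms}}{57.255\,\text{ms}} \approx 1.12, \qquad \frac{13.898\,\text{ms}}{12.658\,\text{ms}} \approx 1.10 \]

so Restore P2L checkpoints and Initialize L2P each come in roughly 10-12% slower on the second pass, plausibly just run-to-run variation on a shared VM host rather than a restore-path regression.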
00:24:52.818 [2024-12-14 12:50:52.455397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.455439] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:52.818 [2024-12-14 12:50:52.455451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.455460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:52.818 [2024-12-14 12:50:52.455469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:52.818 [2024-12-14 12:50:52.455477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.480696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.480751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:52.818 [2024-12-14 12:50:52.480770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.199 ms 00:24:52.818 [2024-12-14 12:50:52.480779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.480864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.818 [2024-12-14 12:50:52.480876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:52.818 [2024-12-14 12:50:52.480885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:52.818 [2024-12-14 12:50:52.480893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.818 [2024-12-14 12:50:52.482261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 300.662 ms, result 0 00:24:54.203 [flattened progress meter condensed: Copying: 15/1024 [MB] (15 MBps) → Copying: 1024/1024 [MB] (average 16 MBps), 2024-12-14T12:50:54.883Z–2024-12-14T12:51:56.962Z; ~60 intermediate readings omitted] [2024-12-14 12:51:56.711409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.225 [2024-12-14 12:51:56.711496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:57.225 [2024-12-14 12:51:56.711513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.225 [2024-12-14 12:51:56.711528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.225 [2024-12-14 12:51:56.711554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:57.225 [2024-12-14 12:51:56.715066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.225 [2024-12-14 12:51:56.715110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:57.225 [2024-12-14 12:51:56.715122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:25:57.225 [2024-12-14 12:51:56.715133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.225 [2024-12-14 12:51:56.715373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.225 [2024-12-14 12:51:56.715385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:57.225 [2024-12-14 12:51:56.715394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:25:57.225 [2024-12-14 12:51:56.715409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:25:57.225 [2024-12-14 12:51:56.722122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.225 [2024-12-14 12:51:56.722173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:57.225 [2024-12-14 12:51:56.722185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.695 ms 00:25:57.225 [2024-12-14 12:51:56.722193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.225 [2024-12-14 12:51:56.728892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.225 [2024-12-14 12:51:56.728936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:57.225 [2024-12-14 12:51:56.728949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.651 ms 00:25:57.225 [2024-12-14 12:51:56.728965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.225 [2024-12-14 12:51:56.756700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.225 [2024-12-14 12:51:56.756751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:57.225 [2024-12-14 12:51:56.756765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.692 ms 00:25:57.225 [2024-12-14 12:51:56.756773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.225 [2024-12-14 12:51:56.773238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.226 [2024-12-14 12:51:56.773288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:57.226 [2024-12-14 12:51:56.773303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.415 ms 00:25:57.226 [2024-12-14 12:51:56.773311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.488 [2024-12-14 12:51:57.033652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.488 [2024-12-14 12:51:57.033713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:57.488 [2024-12-14 12:51:57.033728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 260.283 ms 00:25:57.488 [2024-12-14 12:51:57.033737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.488 [2024-12-14 12:51:57.059846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.488 [2024-12-14 12:51:57.059898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:57.488 [2024-12-14 12:51:57.059911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.091 ms 00:25:57.488 [2024-12-14 12:51:57.059920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.488 [2024-12-14 12:51:57.085563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.488 [2024-12-14 12:51:57.085613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:57.488 [2024-12-14 12:51:57.085625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.597 ms 00:25:57.488 [2024-12-14 12:51:57.085632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.488 [2024-12-14 12:51:57.110541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.488 [2024-12-14 12:51:57.110586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:57.488 [2024-12-14 12:51:57.110599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.864 ms 00:25:57.488 
[2024-12-14 12:51:57.110606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.488 [2024-12-14 12:51:57.135526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.488 [2024-12-14 12:51:57.135575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:57.488 [2024-12-14 12:51:57.135588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.848 ms 00:25:57.488 [2024-12-14 12:51:57.135596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.488 [2024-12-14 12:51:57.135639] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:57.488 [2024-12-14 12:51:57.135655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:57.488 [2024-12-14 12:51:57.135666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 
12:51:57.135828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.135997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.136004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:57.488 [2024-12-14 12:51:57.136011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:25:57.489 [2024-12-14 12:51:57.136018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:57.489 [2024-12-14 12:51:57.136471] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:57.489 [2024-12-14 12:51:57.136479] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c4ed0d54-cf59-4e49-a35e-1ffe69f88ff2 00:25:57.489 [2024-12-14 12:51:57.136488] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:57.489 [2024-12-14 12:51:57.136495] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 23744 00:25:57.489 [2024-12-14 12:51:57.136503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 22784 00:25:57.489 [2024-12-14 12:51:57.136513] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0421 00:25:57.489 [2024-12-14 12:51:57.136526] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:57.489 [2024-12-14 12:51:57.136542] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:57.489 [2024-12-14 12:51:57.136549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:57.489 [2024-12-14 12:51:57.136556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:57.489 [2024-12-14 12:51:57.136563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:57.489 [2024-12-14 12:51:57.136571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.489 [2024-12-14 12:51:57.136579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:57.489 [2024-12-14 12:51:57.136588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:25:57.489 [2024-12-14 12:51:57.136595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.489 [2024-12-14 12:51:57.150051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.489 [2024-12-14 12:51:57.150102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:57.489 [2024-12-14 12:51:57.150120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.436 ms 00:25:57.489 [2024-12-14 12:51:57.150129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.489 [2024-12-14 12:51:57.150531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.489 [2024-12-14 12:51:57.150554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:57.489 [2024-12-14 12:51:57.150564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:25:57.489 [2024-12-14 12:51:57.150572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.489 [2024-12-14 12:51:57.186812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.489 [2024-12-14 12:51:57.186871] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.489 [2024-12-14 12:51:57.186883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.489 [2024-12-14 12:51:57.186894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.489 [2024-12-14 12:51:57.186961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.489 [2024-12-14 12:51:57.186972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.489 [2024-12-14 12:51:57.186982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.489 [2024-12-14 12:51:57.186992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.489 [2024-12-14 12:51:57.187074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.489 [2024-12-14 12:51:57.187087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.489 [2024-12-14 12:51:57.187102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.489 [2024-12-14 12:51:57.187111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.489 [2024-12-14 12:51:57.187128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.489 [2024-12-14 12:51:57.187138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.489 [2024-12-14 12:51:57.187147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.489 [2024-12-14 12:51:57.187156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.271294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.271360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.752 [2024-12-14 12:51:57.271374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.271383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.341587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.341636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.752 [2024-12-14 12:51:57.341649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.341658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.341734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.341745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.752 [2024-12-14 12:51:57.341755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.341771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.341809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.341820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.752 [2024-12-14 12:51:57.341828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.341837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.341938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.341949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.752 [2024-12-14 12:51:57.341957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.341965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.342000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.342011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:57.752 [2024-12-14 12:51:57.342019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.342027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.342094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.342104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.752 [2024-12-14 12:51:57.342113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.342121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.342174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:57.752 [2024-12-14 12:51:57.342185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.752 [2024-12-14 12:51:57.342194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:57.752 [2024-12-14 12:51:57.342203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.752 [2024-12-14 12:51:57.342338] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 630.903 ms, result 0 00:25:58.696 00:25:58.696 00:25:58.696 12:51:58 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:01.245 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:01.245 Process with pid 79035 is not found 00:26:01.245 Remove shared memory files 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79035 00:26:01.245 12:52:00 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79035 ']' 00:26:01.245 12:52:00 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79035 00:26:01.245 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79035) - No such process 00:26:01.245 12:52:00 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79035 is not found' 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm 
-f 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:01.245 12:52:00 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:01.245 00:26:01.245 real 4m44.498s 00:26:01.245 user 4m31.328s 00:26:01.245 sys 0m12.851s 00:26:01.246 12:52:00 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.246 12:52:00 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:01.246 ************************************ 00:26:01.246 END TEST ftl_restore 00:26:01.246 ************************************ 00:26:01.246 12:52:00 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:01.246 12:52:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:01.246 12:52:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.246 12:52:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:01.246 ************************************ 00:26:01.246 START TEST ftl_dirty_shutdown 00:26:01.246 ************************************ 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:01.246 * Looking for test storage... 00:26:01.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.246 --rc genhtml_branch_coverage=1 00:26:01.246 --rc genhtml_function_coverage=1 00:26:01.246 --rc genhtml_legend=1 00:26:01.246 --rc geninfo_all_blocks=1 00:26:01.246 --rc geninfo_unexecuted_blocks=1 00:26:01.246 00:26:01.246 ' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.246 --rc genhtml_branch_coverage=1 00:26:01.246 --rc genhtml_function_coverage=1 00:26:01.246 --rc genhtml_legend=1 00:26:01.246 --rc geninfo_all_blocks=1 00:26:01.246 --rc geninfo_unexecuted_blocks=1 00:26:01.246 00:26:01.246 ' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.246 --rc genhtml_branch_coverage=1 00:26:01.246 --rc genhtml_function_coverage=1 00:26:01.246 --rc genhtml_legend=1 00:26:01.246 --rc geninfo_all_blocks=1 00:26:01.246 --rc geninfo_unexecuted_blocks=1 00:26:01.246 00:26:01.246 ' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:01.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.246 --rc genhtml_branch_coverage=1 00:26:01.246 --rc genhtml_function_coverage=1 00:26:01.246 --rc genhtml_legend=1 00:26:01.246 --rc geninfo_all_blocks=1 00:26:01.246 --rc geninfo_unexecuted_blocks=1 00:26:01.246 00:26:01.246 ' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:01.246 12:52:00 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=82080 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 82080 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82080 ']' 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.246 12:52:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:01.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:01.247 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.247 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.247 12:52:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:01.247 [2024-12-14 12:52:00.863254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
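(Editor's note: dirty_shutdown.sh@44 launches the target with spdk_tgt -m 0x1 and then blocks in waitforlisten 82080 until the daemon answers RPCs on /var/tmp/spdk.sock. The real helper in autotest_common.sh does more bookkeeping; the sketch below captures only the wait loop, using calls that appear in this log (rpc.py, kill -0). The 30 s deadline is illustrative, not the harness's actual timeout:

```bash
#!/usr/bin/env bash
# Minimal wait-for-RPC loop in the style of waitforlisten: poll the RPC
# socket until the target responds, failing fast if the process dies.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock
pid=$1                        # e.g. 82080 in the run above
deadline=$((SECONDS + 30))    # illustrative timeout
while ! "$rpc" -s "$sock" spdk_get_version >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt ($pid) exited early" >&2; exit 1; }
    (( SECONDS < deadline )) || { echo "timed out waiting on $sock" >&2; exit 1; }
    sleep 0.1
done
```
)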
00:26:01.247 [2024-12-14 12:52:00.863844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82080 ] 00:26:01.508 [2024-12-14 12:52:01.028254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.508 [2024-12-14 12:52:01.146852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:02.450 12:52:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:02.450 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:02.711 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:02.711 { 00:26:02.711 "name": "nvme0n1", 00:26:02.711 "aliases": [ 00:26:02.711 "523afa14-85b8-49a9-b09b-015fca49d561" 00:26:02.711 ], 00:26:02.711 "product_name": "NVMe disk", 00:26:02.712 "block_size": 4096, 00:26:02.712 "num_blocks": 1310720, 00:26:02.712 "uuid": "523afa14-85b8-49a9-b09b-015fca49d561", 00:26:02.712 "numa_id": -1, 00:26:02.712 "assigned_rate_limits": { 00:26:02.712 "rw_ios_per_sec": 0, 00:26:02.712 "rw_mbytes_per_sec": 0, 00:26:02.712 "r_mbytes_per_sec": 0, 00:26:02.712 "w_mbytes_per_sec": 0 00:26:02.712 }, 00:26:02.712 "claimed": true, 00:26:02.712 "claim_type": "read_many_write_one", 00:26:02.712 "zoned": false, 00:26:02.712 "supported_io_types": { 00:26:02.712 "read": true, 00:26:02.712 "write": true, 00:26:02.712 "unmap": true, 00:26:02.712 "flush": true, 00:26:02.712 "reset": true, 00:26:02.712 "nvme_admin": true, 00:26:02.712 "nvme_io": true, 00:26:02.712 "nvme_io_md": false, 00:26:02.712 "write_zeroes": true, 00:26:02.712 "zcopy": false, 00:26:02.712 "get_zone_info": false, 00:26:02.712 "zone_management": false, 00:26:02.712 "zone_append": false, 00:26:02.712 "compare": true, 00:26:02.712 "compare_and_write": false, 00:26:02.712 "abort": true, 00:26:02.712 "seek_hole": false, 00:26:02.712 "seek_data": false, 00:26:02.712 
"copy": true, 00:26:02.712 "nvme_iov_md": false 00:26:02.712 }, 00:26:02.712 "driver_specific": { 00:26:02.712 "nvme": [ 00:26:02.712 { 00:26:02.712 "pci_address": "0000:00:11.0", 00:26:02.712 "trid": { 00:26:02.712 "trtype": "PCIe", 00:26:02.712 "traddr": "0000:00:11.0" 00:26:02.712 }, 00:26:02.712 "ctrlr_data": { 00:26:02.712 "cntlid": 0, 00:26:02.712 "vendor_id": "0x1b36", 00:26:02.712 "model_number": "QEMU NVMe Ctrl", 00:26:02.712 "serial_number": "12341", 00:26:02.712 "firmware_revision": "8.0.0", 00:26:02.712 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:02.712 "oacs": { 00:26:02.712 "security": 0, 00:26:02.712 "format": 1, 00:26:02.712 "firmware": 0, 00:26:02.712 "ns_manage": 1 00:26:02.712 }, 00:26:02.712 "multi_ctrlr": false, 00:26:02.712 "ana_reporting": false 00:26:02.712 }, 00:26:02.712 "vs": { 00:26:02.712 "nvme_version": "1.4" 00:26:02.712 }, 00:26:02.712 "ns_data": { 00:26:02.712 "id": 1, 00:26:02.712 "can_share": false 00:26:02.712 } 00:26:02.712 } 00:26:02.712 ], 00:26:02.712 "mp_policy": "active_passive" 00:26:02.712 } 00:26:02.712 } 00:26:02.712 ]' 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:02.712 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:02.974 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=433d7624-babd-4c15-9a6d-1db882fc8857 00:26:02.974 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:02.974 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 433d7624-babd-4c15-9a6d-1db882fc8857 00:26:03.234 12:52:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:03.496 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=2a6c18bc-2997-42a2-a6e9-1edd0fd53d24 00:26:03.496 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2a6c18bc-2997-42a2-a6e9-1edd0fd53d24 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:03.757 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:04.018 { 00:26:04.018 "name": "6c393982-53de-41b6-bbb8-81421fc7fb29", 00:26:04.018 "aliases": [ 00:26:04.018 "lvs/nvme0n1p0" 00:26:04.018 ], 00:26:04.018 "product_name": "Logical Volume", 00:26:04.018 "block_size": 4096, 00:26:04.018 "num_blocks": 26476544, 00:26:04.018 "uuid": "6c393982-53de-41b6-bbb8-81421fc7fb29", 00:26:04.018 "assigned_rate_limits": { 00:26:04.018 "rw_ios_per_sec": 0, 00:26:04.018 "rw_mbytes_per_sec": 0, 00:26:04.018 "r_mbytes_per_sec": 0, 00:26:04.018 "w_mbytes_per_sec": 0 00:26:04.018 }, 00:26:04.018 "claimed": false, 00:26:04.018 "zoned": false, 00:26:04.018 "supported_io_types": { 00:26:04.018 "read": true, 00:26:04.018 "write": true, 00:26:04.018 "unmap": true, 00:26:04.018 "flush": false, 00:26:04.018 "reset": true, 00:26:04.018 "nvme_admin": false, 00:26:04.018 "nvme_io": false, 00:26:04.018 "nvme_io_md": false, 00:26:04.018 "write_zeroes": true, 00:26:04.018 "zcopy": false, 00:26:04.018 "get_zone_info": false, 00:26:04.018 "zone_management": false, 00:26:04.018 "zone_append": false, 00:26:04.018 "compare": false, 00:26:04.018 "compare_and_write": false, 00:26:04.018 "abort": false, 00:26:04.018 "seek_hole": true, 00:26:04.018 "seek_data": true, 00:26:04.018 "copy": false, 00:26:04.018 "nvme_iov_md": false 00:26:04.018 }, 00:26:04.018 "driver_specific": { 00:26:04.018 "lvol": { 00:26:04.018 "lvol_store_uuid": "2a6c18bc-2997-42a2-a6e9-1edd0fd53d24", 00:26:04.018 "base_bdev": "nvme0n1", 00:26:04.018 "thin_provision": true, 00:26:04.018 "num_allocated_clusters": 0, 00:26:04.018 "snapshot": false, 00:26:04.018 "clone": false, 00:26:04.018 "esnap_clone": false 00:26:04.018 } 00:26:04.018 } 00:26:04.018 } 00:26:04.018 ]' 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:04.018 12:52:03 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:04.279 12:52:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:04.540 { 00:26:04.540 "name": "6c393982-53de-41b6-bbb8-81421fc7fb29", 00:26:04.540 "aliases": [ 00:26:04.540 "lvs/nvme0n1p0" 00:26:04.540 ], 00:26:04.540 "product_name": "Logical Volume", 00:26:04.540 "block_size": 4096, 00:26:04.540 "num_blocks": 26476544, 00:26:04.540 "uuid": "6c393982-53de-41b6-bbb8-81421fc7fb29", 00:26:04.540 "assigned_rate_limits": { 00:26:04.540 "rw_ios_per_sec": 0, 00:26:04.540 "rw_mbytes_per_sec": 0, 00:26:04.540 "r_mbytes_per_sec": 0, 00:26:04.540 "w_mbytes_per_sec": 0 00:26:04.540 }, 00:26:04.540 "claimed": false, 00:26:04.540 "zoned": false, 00:26:04.540 "supported_io_types": { 00:26:04.540 "read": true, 00:26:04.540 "write": true, 00:26:04.540 "unmap": true, 00:26:04.540 "flush": false, 00:26:04.540 "reset": true, 00:26:04.540 "nvme_admin": false, 00:26:04.540 "nvme_io": false, 00:26:04.540 "nvme_io_md": false, 00:26:04.540 "write_zeroes": true, 00:26:04.540 "zcopy": false, 00:26:04.540 "get_zone_info": false, 00:26:04.540 "zone_management": false, 00:26:04.540 "zone_append": false, 00:26:04.540 "compare": false, 00:26:04.540 "compare_and_write": false, 00:26:04.540 "abort": false, 00:26:04.540 "seek_hole": true, 00:26:04.540 "seek_data": true, 00:26:04.540 "copy": false, 00:26:04.540 "nvme_iov_md": false 00:26:04.540 }, 00:26:04.540 "driver_specific": { 00:26:04.540 "lvol": { 00:26:04.540 "lvol_store_uuid": "2a6c18bc-2997-42a2-a6e9-1edd0fd53d24", 00:26:04.540 "base_bdev": "nvme0n1", 00:26:04.540 "thin_provision": true, 00:26:04.540 "num_allocated_clusters": 0, 00:26:04.540 "snapshot": false, 00:26:04.540 "clone": false, 00:26:04.540 "esnap_clone": false 00:26:04.540 } 00:26:04.540 } 00:26:04.540 } 00:26:04.540 ]' 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:04.540 12:52:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c393982-53de-41b6-bbb8-81421fc7fb29 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:04.802 { 00:26:04.802 "name": "6c393982-53de-41b6-bbb8-81421fc7fb29", 00:26:04.802 "aliases": [ 00:26:04.802 "lvs/nvme0n1p0" 00:26:04.802 ], 00:26:04.802 "product_name": "Logical Volume", 00:26:04.802 "block_size": 4096, 00:26:04.802 "num_blocks": 26476544, 00:26:04.802 "uuid": "6c393982-53de-41b6-bbb8-81421fc7fb29", 00:26:04.802 "assigned_rate_limits": { 00:26:04.802 "rw_ios_per_sec": 0, 00:26:04.802 "rw_mbytes_per_sec": 0, 00:26:04.802 "r_mbytes_per_sec": 0, 00:26:04.802 "w_mbytes_per_sec": 0 00:26:04.802 }, 00:26:04.802 "claimed": false, 00:26:04.802 "zoned": false, 00:26:04.802 "supported_io_types": { 00:26:04.802 "read": true, 00:26:04.802 "write": true, 00:26:04.802 "unmap": true, 00:26:04.802 "flush": false, 00:26:04.802 "reset": true, 00:26:04.802 "nvme_admin": false, 00:26:04.802 "nvme_io": false, 00:26:04.802 "nvme_io_md": false, 00:26:04.802 "write_zeroes": true, 00:26:04.802 "zcopy": false, 00:26:04.802 "get_zone_info": false, 00:26:04.802 "zone_management": false, 00:26:04.802 "zone_append": false, 00:26:04.802 "compare": false, 00:26:04.802 "compare_and_write": false, 00:26:04.802 "abort": false, 00:26:04.802 "seek_hole": true, 00:26:04.802 "seek_data": true, 00:26:04.802 "copy": false, 00:26:04.802 "nvme_iov_md": false 00:26:04.802 }, 00:26:04.802 "driver_specific": { 00:26:04.802 "lvol": { 00:26:04.802 "lvol_store_uuid": "2a6c18bc-2997-42a2-a6e9-1edd0fd53d24", 00:26:04.802 "base_bdev": "nvme0n1", 00:26:04.802 "thin_provision": true, 00:26:04.802 "num_allocated_clusters": 0, 00:26:04.802 "snapshot": false, 00:26:04.802 "clone": false, 00:26:04.802 "esnap_clone": false 00:26:04.802 } 00:26:04.802 } 00:26:04.802 } 00:26:04.802 ]' 00:26:04.802 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6c393982-53de-41b6-bbb8-81421fc7fb29 
--l2p_dram_limit 10' 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:05.064 12:52:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6c393982-53de-41b6-bbb8-81421fc7fb29 --l2p_dram_limit 10 -c nvc0n1p0 00:26:05.064 [2024-12-14 12:52:04.773766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.064 [2024-12-14 12:52:04.773802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:05.064 [2024-12-14 12:52:04.773814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:05.064 [2024-12-14 12:52:04.773821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.064 [2024-12-14 12:52:04.773865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.064 [2024-12-14 12:52:04.773873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:05.064 [2024-12-14 12:52:04.773881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:05.064 [2024-12-14 12:52:04.773887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.064 [2024-12-14 12:52:04.773907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:05.064 [2024-12-14 12:52:04.774610] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:05.064 [2024-12-14 12:52:04.774635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.064 [2024-12-14 12:52:04.774642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:05.064 [2024-12-14 12:52:04.774650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:26:05.064 [2024-12-14 12:52:04.774656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.064 [2024-12-14 12:52:04.774680] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1f69b931-ed0c-40a1-9030-509521ad0f68 00:26:05.064 [2024-12-14 12:52:04.775602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.064 [2024-12-14 12:52:04.775628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:05.064 [2024-12-14 12:52:04.775636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:05.064 [2024-12-14 12:52:04.775643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.064 [2024-12-14 12:52:04.780273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.064 [2024-12-14 12:52:04.780303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:05.064 [2024-12-14 12:52:04.780310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.575 ms 00:26:05.064 [2024-12-14 12:52:04.780318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.780382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.065 [2024-12-14 12:52:04.780391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:05.065 [2024-12-14 12:52:04.780397] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:05.065 [2024-12-14 12:52:04.780406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.780445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.065 [2024-12-14 12:52:04.780454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:05.065 [2024-12-14 12:52:04.780461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:05.065 [2024-12-14 12:52:04.780469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.780484] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:05.065 [2024-12-14 12:52:04.783355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.065 [2024-12-14 12:52:04.783379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:05.065 [2024-12-14 12:52:04.783389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:26:05.065 [2024-12-14 12:52:04.783395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.783422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.065 [2024-12-14 12:52:04.783429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:05.065 [2024-12-14 12:52:04.783436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:05.065 [2024-12-14 12:52:04.783441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.783456] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:05.065 [2024-12-14 12:52:04.783563] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:05.065 [2024-12-14 12:52:04.783574] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:05.065 [2024-12-14 12:52:04.783582] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:05.065 [2024-12-14 12:52:04.783592] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783598] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783606] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:05.065 [2024-12-14 12:52:04.783611] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:05.065 [2024-12-14 12:52:04.783620] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:05.065 [2024-12-14 12:52:04.783625] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:05.065 [2024-12-14 12:52:04.783632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.065 [2024-12-14 12:52:04.783642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:05.065 [2024-12-14 12:52:04.783649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:26:05.065 [2024-12-14 12:52:04.783654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.783720] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.065 [2024-12-14 12:52:04.783726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:05.065 [2024-12-14 12:52:04.783733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:05.065 [2024-12-14 12:52:04.783738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.065 [2024-12-14 12:52:04.783814] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:05.065 [2024-12-14 12:52:04.783821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:05.065 [2024-12-14 12:52:04.783828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:05.065 [2024-12-14 12:52:04.783846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:05.065 [2024-12-14 12:52:04.783864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.065 [2024-12-14 12:52:04.783875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:05.065 [2024-12-14 12:52:04.783880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:05.065 [2024-12-14 12:52:04.783888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:05.065 [2024-12-14 12:52:04.783893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:05.065 [2024-12-14 12:52:04.783899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:05.065 [2024-12-14 12:52:04.783904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:05.065 [2024-12-14 12:52:04.783917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:05.065 [2024-12-14 12:52:04.783935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:05.065 [2024-12-14 12:52:04.783951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:05.065 [2024-12-14 12:52:04.783967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783978] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:05.065 [2024-12-14 12:52:04.783983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:05.065 [2024-12-14 12:52:04.783988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:05.065 [2024-12-14 12:52:04.783993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:05.065 [2024-12-14 12:52:04.784001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:05.065 [2024-12-14 12:52:04.784006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.065 [2024-12-14 12:52:04.784013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:05.065 [2024-12-14 12:52:04.784018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:05.065 [2024-12-14 12:52:04.784024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:05.065 [2024-12-14 12:52:04.784029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:05.065 [2024-12-14 12:52:04.784036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:05.065 [2024-12-14 12:52:04.784041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.784047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:05.065 [2024-12-14 12:52:04.784052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:05.065 [2024-12-14 12:52:04.784067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.784072] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:05.065 [2024-12-14 12:52:04.784079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:05.065 [2024-12-14 12:52:04.784085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:05.065 [2024-12-14 12:52:04.784091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:05.065 [2024-12-14 12:52:04.784097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:05.065 [2024-12-14 12:52:04.784105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:05.065 [2024-12-14 12:52:04.784110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:05.065 [2024-12-14 12:52:04.784117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:05.065 [2024-12-14 12:52:04.784122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:05.065 [2024-12-14 12:52:04.784128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:05.065 [2024-12-14 12:52:04.784134] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:05.065 [2024-12-14 12:52:04.784143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.065 [2024-12-14 12:52:04.784151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:05.065 [2024-12-14 12:52:04.784158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:05.065 [2024-12-14 12:52:04.784163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:05.065 [2024-12-14 12:52:04.784170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:05.065 [2024-12-14 12:52:04.784176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:05.065 [2024-12-14 12:52:04.784183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:05.065 [2024-12-14 12:52:04.784188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:05.065 [2024-12-14 12:52:04.784196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:05.065 [2024-12-14 12:52:04.784201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:05.065 [2024-12-14 12:52:04.784210] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:05.065 [2024-12-14 12:52:04.784221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:05.065 [2024-12-14 12:52:04.784229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:05.065 [2024-12-14 12:52:04.784235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:05.066 [2024-12-14 12:52:04.784242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:05.066 [2024-12-14 12:52:04.784247] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:05.066 [2024-12-14 12:52:04.784254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:05.066 [2024-12-14 12:52:04.784261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:05.066 [2024-12-14 12:52:04.784267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:05.066 [2024-12-14 12:52:04.784273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:05.066 [2024-12-14 12:52:04.784279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:05.066 [2024-12-14 12:52:04.784284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:05.066 [2024-12-14 12:52:04.784291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:05.066 [2024-12-14 12:52:04.784297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:26:05.066 [2024-12-14 12:52:04.784304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:05.066 [2024-12-14 12:52:04.784332] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:05.066 [2024-12-14 12:52:04.784342] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:08.368 [2024-12-14 12:52:07.905456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.368 [2024-12-14 12:52:07.905553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:08.368 [2024-12-14 12:52:07.905572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3121.109 ms 00:26:08.368 [2024-12-14 12:52:07.905585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.368 [2024-12-14 12:52:07.937255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.368 [2024-12-14 12:52:07.937320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:08.368 [2024-12-14 12:52:07.937334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.423 ms 00:26:08.368 [2024-12-14 12:52:07.937345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.368 [2024-12-14 12:52:07.937508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.368 [2024-12-14 12:52:07.937524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:08.368 [2024-12-14 12:52:07.937535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:08.368 [2024-12-14 12:52:07.937553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.368 [2024-12-14 12:52:07.972509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.368 [2024-12-14 12:52:07.972808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:08.368 [2024-12-14 12:52:07.972829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.918 ms 00:26:08.368 [2024-12-14 12:52:07.972840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.368 [2024-12-14 12:52:07.972876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.368 [2024-12-14 12:52:07.972893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:08.368 [2024-12-14 12:52:07.972903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:08.369 [2024-12-14 12:52:07.972921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.369 [2024-12-14 12:52:07.973532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.369 [2024-12-14 12:52:07.973561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:08.369 [2024-12-14 12:52:07.973573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:26:08.369 [2024-12-14 12:52:07.973584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.369 [2024-12-14 12:52:07.973696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.369 [2024-12-14 12:52:07.973710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:08.369 [2024-12-14 12:52:07.973721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:08.369 [2024-12-14 12:52:07.973734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.369 [2024-12-14 12:52:07.990988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.369 [2024-12-14 12:52:07.991208] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:08.369 [2024-12-14 12:52:07.991227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.236 ms 00:26:08.369 [2024-12-14 12:52:07.991239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.369 [2024-12-14 12:52:08.018979] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:08.369 [2024-12-14 12:52:08.023307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.369 [2024-12-14 12:52:08.023352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:08.369 [2024-12-14 12:52:08.023368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.960 ms 00:26:08.369 [2024-12-14 12:52:08.023377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.109939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.110000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:08.631 [2024-12-14 12:52:08.110018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.511 ms 00:26:08.631 [2024-12-14 12:52:08.110028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.110259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.110278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:08.631 [2024-12-14 12:52:08.110293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:26:08.631 [2024-12-14 12:52:08.110302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.135983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.136030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:08.631 [2024-12-14 12:52:08.136046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.623 ms 00:26:08.631 [2024-12-14 12:52:08.136075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.160724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.160768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:08.631 [2024-12-14 12:52:08.160783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.590 ms 00:26:08.631 [2024-12-14 12:52:08.160791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.161440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.161475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:08.631 [2024-12-14 12:52:08.161487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:26:08.631 [2024-12-14 12:52:08.161498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.242695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.242746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:08.631 [2024-12-14 12:52:08.242765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.150 ms 00:26:08.631 [2024-12-14 12:52:08.242773] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.270095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.270363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:08.631 [2024-12-14 12:52:08.270391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.227 ms 00:26:08.631 [2024-12-14 12:52:08.270400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.296147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.296192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:08.631 [2024-12-14 12:52:08.296207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.612 ms 00:26:08.631 [2024-12-14 12:52:08.296215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.322686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.322731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:08.631 [2024-12-14 12:52:08.322746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.420 ms 00:26:08.631 [2024-12-14 12:52:08.322755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.322807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.322818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:08.631 [2024-12-14 12:52:08.322832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:08.631 [2024-12-14 12:52:08.322841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.322939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:08.631 [2024-12-14 12:52:08.322953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:08.631 [2024-12-14 12:52:08.322965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:08.631 [2024-12-14 12:52:08.322973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:08.631 [2024-12-14 12:52:08.324143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3549.865 ms, result 0 00:26:08.631 { 00:26:08.631 "name": "ftl0", 00:26:08.631 "uuid": "1f69b931-ed0c-40a1-9030-509521ad0f68" 00:26:08.631 } 00:26:08.631 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:08.631 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:08.893 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:08.893 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:08.893 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:09.154 /dev/nbd0 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:09.154 1+0 records in 00:26:09.154 1+0 records out 00:26:09.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536025 s, 7.6 MB/s 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:09.154 12:52:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:09.415 [2024-12-14 12:52:08.928758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:09.415 [2024-12-14 12:52:08.928917] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82222 ] 00:26:09.415 [2024-12-14 12:52:09.096453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.677 [2024-12-14 12:52:09.243117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.068  [2024-12-14T12:52:11.743Z] Copying: 188/1024 [MB] (188 MBps) [2024-12-14T12:52:12.679Z] Copying: 403/1024 [MB] (214 MBps) [2024-12-14T12:52:13.614Z] Copying: 661/1024 [MB] (258 MBps) [2024-12-14T12:52:14.180Z] Copying: 917/1024 [MB] (255 MBps) [2024-12-14T12:52:14.749Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:26:15.012 00:26:15.012 12:52:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:17.558 12:52:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:17.558 [2024-12-14 12:52:16.818023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
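The sizing values traced above all follow from the bdev dump, and a minimal worked sketch of that arithmetic may help when reading the trace. The ~5% cache ratio is an inference from the numbers shown; the exact expression inside ftl/common.sh is not visible in this log.

# Base bdev size in MiB, from the bdev_get_bdevs output: num_blocks * block_size / 1024^2
echo $(( 26476544 * 4096 / 1024 / 1024 ))   # -> 103424, matching bdev_size=103424 above
# Write-buffer cache carved from nvc0n1 via bdev_split_create; 103424 * 5 / 100 = 5171,
# matching cache_size=5171 and "NV cache device capacity: 5171.00 MiB" (assumed ~5% ratio)
echo $(( 103424 * 5 / 100 ))                # -> 5171
# L2P table: 20971520 entries * 4 B/entry = 80 MiB, matching "Region l2p ... 80.00 MiB";
# bdev_ftl_create --l2p_dram_limit 10 caps the resident portion, hence the later
# "l2p maximum resident size is: 9 (of 10) MiB" notice
echo $(( 20971520 * 4 / 1024 / 1024 ))      # -> 80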
00:26:17.558 [2024-12-14 12:52:16.818275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82300 ] 00:26:17.558 [2024-12-14 12:52:16.976590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.558 [2024-12-14 12:52:17.085270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.945  [2024-12-14T12:52:19.616Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-14T12:52:20.550Z] Copying: 36/1024 [MB] (25 MBps) [2024-12-14T12:52:21.539Z] Copying: 71/1024 [MB] (34 MBps) [2024-12-14T12:52:22.489Z] Copying: 86/1024 [MB] (14 MBps) [2024-12-14T12:52:23.424Z] Copying: 108/1024 [MB] (22 MBps) [2024-12-14T12:52:24.358Z] Copying: 132/1024 [MB] (23 MBps) [2024-12-14T12:52:25.731Z] Copying: 143/1024 [MB] (11 MBps) [2024-12-14T12:52:26.664Z] Copying: 170/1024 [MB] (26 MBps) [2024-12-14T12:52:27.598Z] Copying: 189/1024 [MB] (19 MBps) [2024-12-14T12:52:28.532Z] Copying: 224/1024 [MB] (35 MBps) [2024-12-14T12:52:29.466Z] Copying: 258/1024 [MB] (34 MBps) [2024-12-14T12:52:30.400Z] Copying: 281/1024 [MB] (22 MBps) [2024-12-14T12:52:31.334Z] Copying: 296/1024 [MB] (14 MBps) [2024-12-14T12:52:32.709Z] Copying: 310/1024 [MB] (14 MBps) [2024-12-14T12:52:33.644Z] Copying: 327/1024 [MB] (16 MBps) [2024-12-14T12:52:34.578Z] Copying: 344/1024 [MB] (17 MBps) [2024-12-14T12:52:35.513Z] Copying: 359/1024 [MB] (14 MBps) [2024-12-14T12:52:36.447Z] Copying: 370/1024 [MB] (11 MBps) [2024-12-14T12:52:37.382Z] Copying: 387/1024 [MB] (17 MBps) [2024-12-14T12:52:38.756Z] Copying: 404/1024 [MB] (16 MBps) [2024-12-14T12:52:39.691Z] Copying: 424/1024 [MB] (20 MBps) [2024-12-14T12:52:40.626Z] Copying: 444/1024 [MB] (19 MBps) [2024-12-14T12:52:41.561Z] Copying: 462/1024 [MB] (17 MBps) [2024-12-14T12:52:42.496Z] Copying: 478/1024 [MB] (15 MBps) [2024-12-14T12:52:43.431Z] Copying: 494/1024 [MB] (16 MBps) [2024-12-14T12:52:44.365Z] Copying: 511/1024 [MB] (16 MBps) [2024-12-14T12:52:45.739Z] Copying: 527/1024 [MB] (16 MBps) [2024-12-14T12:52:46.673Z] Copying: 544/1024 [MB] (16 MBps) [2024-12-14T12:52:47.623Z] Copying: 560/1024 [MB] (15 MBps) [2024-12-14T12:52:48.572Z] Copying: 572/1024 [MB] (12 MBps) [2024-12-14T12:52:49.506Z] Copying: 587/1024 [MB] (14 MBps) [2024-12-14T12:52:50.440Z] Copying: 605/1024 [MB] (17 MBps) [2024-12-14T12:52:51.374Z] Copying: 621/1024 [MB] (16 MBps) [2024-12-14T12:52:52.748Z] Copying: 638/1024 [MB] (17 MBps) [2024-12-14T12:52:53.682Z] Copying: 653/1024 [MB] (15 MBps) [2024-12-14T12:52:54.614Z] Copying: 667/1024 [MB] (14 MBps) [2024-12-14T12:52:55.548Z] Copying: 680/1024 [MB] (12 MBps) [2024-12-14T12:52:56.482Z] Copying: 691/1024 [MB] (11 MBps) [2024-12-14T12:52:57.416Z] Copying: 702/1024 [MB] (10 MBps) [2024-12-14T12:52:58.350Z] Copying: 715/1024 [MB] (13 MBps) [2024-12-14T12:52:59.724Z] Copying: 731/1024 [MB] (16 MBps) [2024-12-14T12:53:00.658Z] Copying: 745/1024 [MB] (13 MBps) [2024-12-14T12:53:01.591Z] Copying: 758/1024 [MB] (12 MBps) [2024-12-14T12:53:02.525Z] Copying: 775/1024 [MB] (17 MBps) [2024-12-14T12:53:03.459Z] Copying: 788/1024 [MB] (13 MBps) [2024-12-14T12:53:04.392Z] Copying: 800/1024 [MB] (11 MBps) [2024-12-14T12:53:05.325Z] Copying: 813/1024 [MB] (13 MBps) [2024-12-14T12:53:06.699Z] Copying: 827/1024 [MB] (14 MBps) [2024-12-14T12:53:07.634Z] Copying: 844/1024 [MB] (16 MBps) [2024-12-14T12:53:08.568Z] Copying: 869/1024 [MB] (24 MBps) 
[2024-12-14T12:53:09.502Z] Copying: 903/1024 [MB] (34 MBps) [2024-12-14T12:53:10.435Z] Copying: 926/1024 [MB] (22 MBps) [2024-12-14T12:53:11.369Z] Copying: 941/1024 [MB] (15 MBps) [2024-12-14T12:53:12.741Z] Copying: 960/1024 [MB] (18 MBps) [2024-12-14T12:53:13.359Z] Copying: 994/1024 [MB] (34 MBps) [2024-12-14T12:53:13.969Z] Copying: 1014/1024 [MB] (19 MBps) [2024-12-14T12:53:14.905Z] Copying: 1024/1024 [MB] (average 18 MBps) 00:27:15.168 00:27:15.168 12:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:15.168 12:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:15.168 12:53:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:15.430 [2024-12-14 12:53:14.983351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:14.983391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:15.430 [2024-12-14 12:53:14.983402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:15.430 [2024-12-14 12:53:14.983409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:14.983429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:15.430 [2024-12-14 12:53:14.985571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:14.985597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:15.430 [2024-12-14 12:53:14.985607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.128 ms 00:27:15.430 [2024-12-14 12:53:14.985614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:14.987316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:14.987422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:15.430 [2024-12-14 12:53:14.987440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.681 ms 00:27:15.430 [2024-12-14 12:53:14.987447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:15.001989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:15.002018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:15.430 [2024-12-14 12:53:15.002028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.523 ms 00:27:15.430 [2024-12-14 12:53:15.002034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:15.006824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:15.006847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:15.430 [2024-12-14 12:53:15.006856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:27:15.430 [2024-12-14 12:53:15.006863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:15.024999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:15.025020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:15.430 [2024-12-14 12:53:15.025030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.080 ms 00:27:15.430 [2024-12-14 12:53:15.025036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:15.037360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:15.037386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:15.430 [2024-12-14 12:53:15.037398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.282 ms 00:27:15.430 [2024-12-14 12:53:15.037405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.430 [2024-12-14 12:53:15.037524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.430 [2024-12-14 12:53:15.037532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:15.431 [2024-12-14 12:53:15.037540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:27:15.431 [2024-12-14 12:53:15.037546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.431 [2024-12-14 12:53:15.055354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.431 [2024-12-14 12:53:15.055378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:15.431 [2024-12-14 12:53:15.055387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.792 ms 00:27:15.431 [2024-12-14 12:53:15.055393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.431 [2024-12-14 12:53:15.072604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.431 [2024-12-14 12:53:15.072711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:15.431 [2024-12-14 12:53:15.072726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.182 ms 00:27:15.431 [2024-12-14 12:53:15.072731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.431 [2024-12-14 12:53:15.089896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.431 [2024-12-14 12:53:15.089921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:15.431 [2024-12-14 12:53:15.089930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.137 ms 00:27:15.431 [2024-12-14 12:53:15.089936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.431 [2024-12-14 12:53:15.106733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.431 [2024-12-14 12:53:15.106757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:15.431 [2024-12-14 12:53:15.106766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.740 ms 00:27:15.431 [2024-12-14 12:53:15.106772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.431 [2024-12-14 12:53:15.106799] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:15.431 [2024-12-14 12:53:15.106810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 
[2024-12-14 12:53:15.106838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.106998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:27:15.431 [2024-12-14 12:53:15.107006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:15.431 [2024-12-14 12:53:15.107339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:15.432 [2024-12-14 12:53:15.107508] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:15.432 [2024-12-14 12:53:15.107515] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f69b931-ed0c-40a1-9030-509521ad0f68 00:27:15.432 [2024-12-14 12:53:15.107521] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:15.432 [2024-12-14 12:53:15.107528] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
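The shutdown dump above reads as "Band N: valid_blocks / total_blocks wr_cnt: writes state: ...", and the summary continuing below reports total writes: 960, user writes: 0, and WAF: inf. Those values are consistent with WAF being total writes over user writes, printed as "inf" when no user writes occurred; a purely illustrative sketch of that computation, with the two counts transcribed from this dump:

total_writes=960   # all internal/metadata traffic here, since user writes is 0
user_writes=0
awk -v t="$total_writes" -v u="$user_writes" \
    'BEGIN { if (u == 0) print "WAF: inf"; else printf "WAF: %.3f\n", t / u }'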
00:27:15.432 [2024-12-14 12:53:15.107534] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:15.432 [2024-12-14 12:53:15.107542] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:15.432 [2024-12-14 12:53:15.107548] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:15.432 [2024-12-14 12:53:15.107554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:15.432 [2024-12-14 12:53:15.107560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:15.432 [2024-12-14 12:53:15.107567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:15.432 [2024-12-14 12:53:15.107571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:15.432 [2024-12-14 12:53:15.107578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.432 [2024-12-14 12:53:15.107583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:15.432 [2024-12-14 12:53:15.107591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:27:15.432 [2024-12-14 12:53:15.107596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.432 [2024-12-14 12:53:15.117300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.432 [2024-12-14 12:53:15.117324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:15.432 [2024-12-14 12:53:15.117333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.678 ms 00:27:15.432 [2024-12-14 12:53:15.117339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.432 [2024-12-14 12:53:15.117622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.432 [2024-12-14 12:53:15.117632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:15.432 [2024-12-14 12:53:15.117642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:27:15.432 [2024-12-14 12:53:15.117647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.432 [2024-12-14 12:53:15.150932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.432 [2024-12-14 12:53:15.150991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:15.432 [2024-12-14 12:53:15.151004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.432 [2024-12-14 12:53:15.151011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.432 [2024-12-14 12:53:15.151072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.432 [2024-12-14 12:53:15.151079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:15.432 [2024-12-14 12:53:15.151087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.432 [2024-12-14 12:53:15.151093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.432 [2024-12-14 12:53:15.151146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.432 [2024-12-14 12:53:15.151155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:15.432 [2024-12-14 12:53:15.151162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.432 [2024-12-14 12:53:15.151168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.432 [2024-12-14 12:53:15.151183] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.432 [2024-12-14 12:53:15.151190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:15.432 [2024-12-14 12:53:15.151197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.432 [2024-12-14 12:53:15.151202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.210661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.210696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:15.693 [2024-12-14 12:53:15.210706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.210712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:15.693 [2024-12-14 12:53:15.259403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:15.693 [2024-12-14 12:53:15.259489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:15.693 [2024-12-14 12:53:15.259558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:15.693 [2024-12-14 12:53:15.259649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:15.693 [2024-12-14 12:53:15.259696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:15.693 [2024-12-14 12:53:15.259744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:27:15.693 [2024-12-14 12:53:15.259788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:15.693 [2024-12-14 12:53:15.259795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:15.693 [2024-12-14 12:53:15.259803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:15.693 [2024-12-14 12:53:15.259808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.693 [2024-12-14 12:53:15.259911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.528 ms, result 0 00:27:15.693 true 00:27:15.693 12:53:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 82080 00:27:15.693 12:53:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid82080 00:27:15.693 12:53:15 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:15.693 [2024-12-14 12:53:15.334460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:27:15.693 [2024-12-14 12:53:15.334578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82913 ] 00:27:15.953 [2024-12-14 12:53:15.493258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.953 [2024-12-14 12:53:15.625632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.340  [2024-12-14T12:53:18.013Z] Copying: 183/1024 [MB] (183 MBps) [2024-12-14T12:53:18.948Z] Copying: 408/1024 [MB] (225 MBps) [2024-12-14T12:53:20.323Z] Copying: 662/1024 [MB] (253 MBps) [2024-12-14T12:53:20.582Z] Copying: 908/1024 [MB] (246 MBps) [2024-12-14T12:53:21.151Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:27:21.414 00:27:21.414 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 82080 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:21.414 12:53:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:21.415 [2024-12-14 12:53:21.074717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
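
At this point the test has force-killed the spdk_tgt process (PID 82080) without a clean FTL shutdown and is re-driving data through standalone spdk_dd, as the `ftl.ftl_dirty_shutdown` xtrace above shows (dirty_shutdown.sh lines 83-88). The following is a condensed sketch of that pattern, not the actual test/ftl/dirty_shutdown.sh; the PID handling and variable names here are illustrative, while the spdk_dd invocations mirror the ones visible in the log.

```bash
#!/usr/bin/env bash
# Sketch of the dirty-shutdown pattern seen above (illustrative, not the real test script).
set -e

SPDK_DIR=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK_DIR/test/ftl/testfile2
FTL_JSON=$SPDK_DIR/test/ftl/config/ftl.json
TGT_PID=82080   # in the real test this is captured when spdk_tgt is launched

# 1. Kill the SPDK target with SIGKILL (no clean 'FTL shutdown' sequence runs),
#    leaving the device dirty so the next startup must take the recovery path.
kill -9 "$TGT_PID"
rm -f "/dev/shm/spdk_tgt_trace.pid$TGT_PID"

# 2. Generate 1 GiB of reference data: 262144 blocks * 4096 B = 1 GiB,
#    which matches the "Copying: 1024/1024 [MB]" progress above.
"$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom --of="$TESTFILE" \
    --bs=4096 --count=262144

# 3. Write the reference data into the ftl0 bdev at a block offset, using the
#    saved JSON config; FTL startup will first run dirty-state recovery.
"$SPDK_DIR/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
    --count=262144 --seek=262144 --json="$FTL_JSON"
```

The startup trace that follows shows exactly that recovery path: blobstore recovery on the NV cache, `Set FTL dirty state`, and the various `Restore ...` steps (NV cache metadata, valid map, band info, trim metadata, P2L checkpoints, L2P).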
00:27:21.415 [2024-12-14 12:53:21.074837] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82968 ] 00:27:21.674 [2024-12-14 12:53:21.230603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.674 [2024-12-14 12:53:21.320362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.934 [2024-12-14 12:53:21.553717] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.934 [2024-12-14 12:53:21.553915] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:21.934 [2024-12-14 12:53:21.620435] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:21.934 [2024-12-14 12:53:21.621257] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:21.934 [2024-12-14 12:53:21.621756] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:22.508 [2024-12-14 12:53:21.955094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.955331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:22.508 [2024-12-14 12:53:21.955357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.508 [2024-12-14 12:53:21.955373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.955446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.955458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:22.508 [2024-12-14 12:53:21.955467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:27:22.508 [2024-12-14 12:53:21.955475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.955497] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:22.508 [2024-12-14 12:53:21.956266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:22.508 [2024-12-14 12:53:21.956288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.956296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:22.508 [2024-12-14 12:53:21.956306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:27:22.508 [2024-12-14 12:53:21.956315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.958167] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:22.508 [2024-12-14 12:53:21.972746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.972799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:22.508 [2024-12-14 12:53:21.972814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.583 ms 00:27:22.508 [2024-12-14 12:53:21.972822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.972923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.972933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:22.508 [2024-12-14 12:53:21.972943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:22.508 [2024-12-14 12:53:21.972951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.981507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.981551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:22.508 [2024-12-14 12:53:21.981562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.475 ms 00:27:22.508 [2024-12-14 12:53:21.981571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.981660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.981669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:22.508 [2024-12-14 12:53:21.981677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:22.508 [2024-12-14 12:53:21.981686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.981735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.981745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:22.508 [2024-12-14 12:53:21.981753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:22.508 [2024-12-14 12:53:21.981761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.981783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:22.508 [2024-12-14 12:53:21.985951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.985992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:22.508 [2024-12-14 12:53:21.986003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.173 ms 00:27:22.508 [2024-12-14 12:53:21.986011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.508 [2024-12-14 12:53:21.986051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.508 [2024-12-14 12:53:21.986080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:22.509 [2024-12-14 12:53:21.986089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:27:22.509 [2024-12-14 12:53:21.986096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.509 [2024-12-14 12:53:21.986155] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:22.509 [2024-12-14 12:53:21.986182] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:22.509 [2024-12-14 12:53:21.986220] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:22.509 [2024-12-14 12:53:21.986236] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:22.509 [2024-12-14 12:53:21.986342] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:22.509 [2024-12-14 12:53:21.986354] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:22.509 
[2024-12-14 12:53:21.986365] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:22.509 [2024-12-14 12:53:21.986379] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:22.509 [2024-12-14 12:53:21.986405] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:22.509 [2024-12-14 12:53:21.986413] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:22.509 [2024-12-14 12:53:21.986422] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:22.509 [2024-12-14 12:53:21.986430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.509 [2024-12-14 12:53:21.986437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:22.509 [2024-12-14 12:53:21.986445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:27:22.509 [2024-12-14 12:53:21.986453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.509 [2024-12-14 12:53:21.986539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.509 [2024-12-14 12:53:21.986552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:22.509 [2024-12-14 12:53:21.986559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:22.509 [2024-12-14 12:53:21.986566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.509 [2024-12-14 12:53:21.986664] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:22.509 [2024-12-14 12:53:21.986674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:22.509 [2024-12-14 12:53:21.986682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:22.509 [2024-12-14 12:53:21.986705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:22.509 [2024-12-14 12:53:21.986726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.509 [2024-12-14 12:53:21.986751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:22.509 [2024-12-14 12:53:21.986758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:22.509 [2024-12-14 12:53:21.986767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:22.509 [2024-12-14 12:53:21.986774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:22.509 [2024-12-14 12:53:21.986781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:22.509 [2024-12-14 12:53:21.986788] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:22.509 [2024-12-14 12:53:21.986804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:22.509 [2024-12-14 12:53:21.986825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:22.509 [2024-12-14 12:53:21.986845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:22.509 [2024-12-14 12:53:21.986865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:22.509 [2024-12-14 12:53:21.986886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:22.509 [2024-12-14 12:53:21.986899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:22.509 [2024-12-14 12:53:21.986905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.509 [2024-12-14 12:53:21.986918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:22.509 [2024-12-14 12:53:21.986924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:22.509 [2024-12-14 12:53:21.986930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:22.509 [2024-12-14 12:53:21.986937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:22.509 [2024-12-14 12:53:21.986944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:22.509 [2024-12-14 12:53:21.986950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:22.509 [2024-12-14 12:53:21.986963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:22.509 [2024-12-14 12:53:21.986969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.509 [2024-12-14 12:53:21.986975] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:22.509 [2024-12-14 12:53:21.986987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:22.509 [2024-12-14 12:53:21.986998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:22.509 [2024-12-14 12:53:21.987005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:22.509 [2024-12-14 
12:53:21.987013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:22.509 [2024-12-14 12:53:21.987020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:22.509 [2024-12-14 12:53:21.987027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:22.509 [2024-12-14 12:53:21.987034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:22.509 [2024-12-14 12:53:21.987041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:22.509 [2024-12-14 12:53:21.987048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:22.509 [2024-12-14 12:53:21.987071] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:22.509 [2024-12-14 12:53:21.987081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:22.509 [2024-12-14 12:53:21.987096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:22.509 [2024-12-14 12:53:21.987103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:22.509 [2024-12-14 12:53:21.987111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:22.509 [2024-12-14 12:53:21.987119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:22.509 [2024-12-14 12:53:21.987126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:22.509 [2024-12-14 12:53:21.987133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:22.509 [2024-12-14 12:53:21.987141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:22.509 [2024-12-14 12:53:21.987149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:22.509 [2024-12-14 12:53:21.987156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:22.509 [2024-12-14 12:53:21.987193] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:22.509 [2024-12-14 12:53:21.987201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:22.509 [2024-12-14 12:53:21.987217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:22.509 [2024-12-14 12:53:21.987224] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:22.509 [2024-12-14 12:53:21.987231] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:22.509 [2024-12-14 12:53:21.987239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.509 [2024-12-14 12:53:21.987248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:22.510 [2024-12-14 12:53:21.987256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:27:22.510 [2024-12-14 12:53:21.987265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.020132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.020341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:22.510 [2024-12-14 12:53:22.020360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.817 ms 00:27:22.510 [2024-12-14 12:53:22.020370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.020467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.020476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:22.510 [2024-12-14 12:53:22.020486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:27:22.510 [2024-12-14 12:53:22.020493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.067957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.068013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:22.510 [2024-12-14 12:53:22.068031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.401 ms 00:27:22.510 [2024-12-14 12:53:22.068040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.068122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.068135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:22.510 [2024-12-14 12:53:22.068144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:22.510 [2024-12-14 12:53:22.068153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.068766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.068809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:22.510 [2024-12-14 12:53:22.068822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:27:22.510 [2024-12-14 12:53:22.068838] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.068999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.069010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:22.510 [2024-12-14 12:53:22.069018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:27:22.510 [2024-12-14 12:53:22.069026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.085383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.085598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:22.510 [2024-12-14 12:53:22.085619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.337 ms 00:27:22.510 [2024-12-14 12:53:22.085628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.100379] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:22.510 [2024-12-14 12:53:22.100568] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:22.510 [2024-12-14 12:53:22.100589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.100598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:22.510 [2024-12-14 12:53:22.100608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.836 ms 00:27:22.510 [2024-12-14 12:53:22.100617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.126822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.126874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:22.510 [2024-12-14 12:53:22.126888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.158 ms 00:27:22.510 [2024-12-14 12:53:22.126896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.140178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.140229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:22.510 [2024-12-14 12:53:22.140242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.225 ms 00:27:22.510 [2024-12-14 12:53:22.140249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.152994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.153042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:22.510 [2024-12-14 12:53:22.153070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.695 ms 00:27:22.510 [2024-12-14 12:53:22.153079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.153771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.153809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:22.510 [2024-12-14 12:53:22.153821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:27:22.510 [2024-12-14 12:53:22.153829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
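
As an aside, the layout dump above is internally consistent on the L2P table size: 20971520 entries at 4 bytes each (the reported `L2P address size`) equal the 80.00 MiB `Region l2p`, which is also the `blk_sz:0x5000` (20480 blocks of 4 KiB) that the v5 superblock dump reports for region type 0x2. A quick check, using only the log's own numbers:

$$
\underbrace{20{,}971{,}520}_{\text{L2P entries}} \times \underbrace{4~\text{B}}_{\text{address size}}
= 83{,}886{,}080~\text{B} = 80~\text{MiB}
= \underbrace{0\text{x}5000}_{20480~\text{blocks}} \times 4~\text{KiB}.
$$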
00:27:22.510 [2024-12-14 12:53:22.221604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.221833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:22.510 [2024-12-14 12:53:22.221860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.750 ms 00:27:22.510 [2024-12-14 12:53:22.221869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.233615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:22.510 [2024-12-14 12:53:22.237154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.237199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:22.510 [2024-12-14 12:53:22.237211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.228 ms 00:27:22.510 [2024-12-14 12:53:22.237226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.237332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.237344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:22.510 [2024-12-14 12:53:22.237355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:22.510 [2024-12-14 12:53:22.237363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.237468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.237480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:22.510 [2024-12-14 12:53:22.237489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:22.510 [2024-12-14 12:53:22.237497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.237524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.237533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:22.510 [2024-12-14 12:53:22.237542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:22.510 [2024-12-14 12:53:22.237551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.510 [2024-12-14 12:53:22.237591] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:22.510 [2024-12-14 12:53:22.237602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.510 [2024-12-14 12:53:22.237611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:22.510 [2024-12-14 12:53:22.237620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:22.510 [2024-12-14 12:53:22.237632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.771 [2024-12-14 12:53:22.264392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.771 [2024-12-14 12:53:22.264466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:22.771 [2024-12-14 12:53:22.264480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.739 ms 00:27:22.771 [2024-12-14 12:53:22.264489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.771 [2024-12-14 12:53:22.264581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:22.771 [2024-12-14 
12:53:22.264591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:22.771 [2024-12-14 12:53:22.264601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:22.771 [2024-12-14 12:53:22.264610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:22.771 [2024-12-14 12:53:22.266185] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.360 ms, result 0 00:27:23.715  [2024-12-14T12:53:24.394Z] Copying: 19/1024 [MB] (19 MBps) [2024-12-14T12:53:25.341Z] Copying: 34/1024 [MB] (14 MBps) [2024-12-14T12:53:26.281Z] Copying: 49/1024 [MB] (14 MBps) [2024-12-14T12:53:27.671Z] Copying: 67/1024 [MB] (18 MBps) [2024-12-14T12:53:28.616Z] Copying: 82/1024 [MB] (15 MBps) [2024-12-14T12:53:29.560Z] Copying: 96/1024 [MB] (13 MBps) [2024-12-14T12:53:30.505Z] Copying: 113/1024 [MB] (16 MBps) [2024-12-14T12:53:31.449Z] Copying: 128/1024 [MB] (14 MBps) [2024-12-14T12:53:32.393Z] Copying: 144/1024 [MB] (16 MBps) [2024-12-14T12:53:33.336Z] Copying: 160/1024 [MB] (16 MBps) [2024-12-14T12:53:34.282Z] Copying: 180/1024 [MB] (19 MBps) [2024-12-14T12:53:35.670Z] Copying: 191/1024 [MB] (11 MBps) [2024-12-14T12:53:36.615Z] Copying: 202/1024 [MB] (10 MBps) [2024-12-14T12:53:37.560Z] Copying: 215/1024 [MB] (12 MBps) [2024-12-14T12:53:38.524Z] Copying: 234/1024 [MB] (19 MBps) [2024-12-14T12:53:39.476Z] Copying: 247/1024 [MB] (12 MBps) [2024-12-14T12:53:40.421Z] Copying: 286/1024 [MB] (39 MBps) [2024-12-14T12:53:41.364Z] Copying: 305/1024 [MB] (18 MBps) [2024-12-14T12:53:42.307Z] Copying: 327/1024 [MB] (22 MBps) [2024-12-14T12:53:43.693Z] Copying: 346/1024 [MB] (18 MBps) [2024-12-14T12:53:44.637Z] Copying: 363/1024 [MB] (17 MBps) [2024-12-14T12:53:45.578Z] Copying: 383/1024 [MB] (19 MBps) [2024-12-14T12:53:46.523Z] Copying: 400/1024 [MB] (17 MBps) [2024-12-14T12:53:47.466Z] Copying: 418/1024 [MB] (17 MBps) [2024-12-14T12:53:48.410Z] Copying: 436/1024 [MB] (18 MBps) [2024-12-14T12:53:49.355Z] Copying: 462/1024 [MB] (26 MBps) [2024-12-14T12:53:50.299Z] Copying: 486/1024 [MB] (23 MBps) [2024-12-14T12:53:51.686Z] Copying: 507/1024 [MB] (21 MBps) [2024-12-14T12:53:52.629Z] Copying: 525/1024 [MB] (17 MBps) [2024-12-14T12:53:53.573Z] Copying: 545/1024 [MB] (20 MBps) [2024-12-14T12:53:54.517Z] Copying: 563/1024 [MB] (18 MBps) [2024-12-14T12:53:55.460Z] Copying: 582/1024 [MB] (18 MBps) [2024-12-14T12:53:56.404Z] Copying: 605/1024 [MB] (23 MBps) [2024-12-14T12:53:57.348Z] Copying: 625/1024 [MB] (19 MBps) [2024-12-14T12:53:58.292Z] Copying: 638/1024 [MB] (13 MBps) [2024-12-14T12:53:59.681Z] Copying: 649/1024 [MB] (10 MBps) [2024-12-14T12:54:00.625Z] Copying: 659/1024 [MB] (10 MBps) [2024-12-14T12:54:01.569Z] Copying: 676/1024 [MB] (16 MBps) [2024-12-14T12:54:02.511Z] Copying: 688/1024 [MB] (12 MBps) [2024-12-14T12:54:03.454Z] Copying: 702/1024 [MB] (14 MBps) [2024-12-14T12:54:04.411Z] Copying: 715/1024 [MB] (12 MBps) [2024-12-14T12:54:05.430Z] Copying: 727/1024 [MB] (12 MBps) [2024-12-14T12:54:06.372Z] Copying: 737/1024 [MB] (10 MBps) [2024-12-14T12:54:07.315Z] Copying: 750/1024 [MB] (13 MBps) [2024-12-14T12:54:08.699Z] Copying: 797/1024 [MB] (46 MBps) [2024-12-14T12:54:09.642Z] Copying: 813/1024 [MB] (16 MBps) [2024-12-14T12:54:10.587Z] Copying: 833/1024 [MB] (19 MBps) [2024-12-14T12:54:11.529Z] Copying: 844/1024 [MB] (10 MBps) [2024-12-14T12:54:12.472Z] Copying: 854/1024 [MB] (10 MBps) [2024-12-14T12:54:13.415Z] Copying: 865/1024 [MB] (10 MBps) [2024-12-14T12:54:14.358Z] Copying: 
883/1024 [MB] (18 MBps) [2024-12-14T12:54:15.307Z] Copying: 897/1024 [MB] (13 MBps) [2024-12-14T12:54:16.692Z] Copying: 912/1024 [MB] (15 MBps) [2024-12-14T12:54:17.635Z] Copying: 926/1024 [MB] (14 MBps) [2024-12-14T12:54:18.580Z] Copying: 943/1024 [MB] (17 MBps) [2024-12-14T12:54:19.523Z] Copying: 956/1024 [MB] (12 MBps) [2024-12-14T12:54:20.464Z] Copying: 968/1024 [MB] (11 MBps) [2024-12-14T12:54:21.405Z] Copying: 980/1024 [MB] (12 MBps) [2024-12-14T12:54:22.348Z] Copying: 1007/1024 [MB] (26 MBps) [2024-12-14T12:54:23.289Z] Copying: 1022/1024 [MB] (15 MBps) [2024-12-14T12:54:23.550Z] Copying: 1048556/1048576 [kB] (1664 kBps) [2024-12-14T12:54:23.550Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-14 12:54:23.315167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.315262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:23.814 [2024-12-14 12:54:23.315282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:23.814 [2024-12-14 12:54:23.315292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.814 [2024-12-14 12:54:23.318336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:23.814 [2024-12-14 12:54:23.324399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.324608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:23.814 [2024-12-14 12:54:23.324635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.015 ms 00:28:23.814 [2024-12-14 12:54:23.324653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.814 [2024-12-14 12:54:23.336510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.336563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:23.814 [2024-12-14 12:54:23.336576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.809 ms 00:28:23.814 [2024-12-14 12:54:23.336586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.814 [2024-12-14 12:54:23.361717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.361770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:23.814 [2024-12-14 12:54:23.361782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.112 ms 00:28:23.814 [2024-12-14 12:54:23.361791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.814 [2024-12-14 12:54:23.368097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.368138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:23.814 [2024-12-14 12:54:23.368149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:28:23.814 [2024-12-14 12:54:23.368157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.814 [2024-12-14 12:54:23.395237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.395440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:23.814 [2024-12-14 12:54:23.395462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.041 ms 00:28:23.814 [2024-12-14 12:54:23.395470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:23.814 
[2024-12-14 12:54:23.411396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:23.814 [2024-12-14 12:54:23.411447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:23.814 [2024-12-14 12:54:23.411462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.847 ms 00:28:23.814 [2024-12-14 12:54:23.411471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.075 [2024-12-14 12:54:23.696582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.075 [2024-12-14 12:54:23.696635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:24.075 [2024-12-14 12:54:23.696656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 285.056 ms 00:28:24.075 [2024-12-14 12:54:23.696665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.075 [2024-12-14 12:54:23.723044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.075 [2024-12-14 12:54:23.723108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:24.075 [2024-12-14 12:54:23.723120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.362 ms 00:28:24.075 [2024-12-14 12:54:23.723142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.075 [2024-12-14 12:54:23.748884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.075 [2024-12-14 12:54:23.748931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:24.075 [2024-12-14 12:54:23.748943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.694 ms 00:28:24.075 [2024-12-14 12:54:23.748950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.075 [2024-12-14 12:54:23.773914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.075 [2024-12-14 12:54:23.773964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:24.076 [2024-12-14 12:54:23.773976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.915 ms 00:28:24.076 [2024-12-14 12:54:23.773983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.076 [2024-12-14 12:54:23.798582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.076 [2024-12-14 12:54:23.798629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:24.076 [2024-12-14 12:54:23.798641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.495 ms 00:28:24.076 [2024-12-14 12:54:23.798648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.076 [2024-12-14 12:54:23.798693] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:24.076 [2024-12-14 12:54:23.798709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 105728 / 261120 wr_cnt: 1 state: open 00:28:24.076 [2024-12-14 12:54:23.798720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 
[2024-12-14 12:54:23.798754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: 
free 00:28:24.076 [2024-12-14 12:54:23.798952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.798998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 
261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:24.076 [2024-12-14 12:54:23.799433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:24.077 [2024-12-14 12:54:23.799561] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:24.077 [2024-12-14 12:54:23.799569] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f69b931-ed0c-40a1-9030-509521ad0f68 00:28:24.077 [2024-12-14 12:54:23.799591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 105728 00:28:24.077 [2024-12-14 12:54:23.799599] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 106688 00:28:24.077 [2024-12-14 12:54:23.799607] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 105728 00:28:24.077 
[2024-12-14 12:54:23.799616] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0091 00:28:24.077 [2024-12-14 12:54:23.799626] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:24.077 [2024-12-14 12:54:23.799635] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:24.077 [2024-12-14 12:54:23.799643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:24.077 [2024-12-14 12:54:23.799650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:24.077 [2024-12-14 12:54:23.799658] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:24.077 [2024-12-14 12:54:23.799666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.077 [2024-12-14 12:54:23.799674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:24.077 [2024-12-14 12:54:23.799683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:28:24.077 [2024-12-14 12:54:23.799691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.813029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.338 [2024-12-14 12:54:23.813093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:24.338 [2024-12-14 12:54:23.813106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.294 ms 00:28:24.338 [2024-12-14 12:54:23.813114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.813551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.338 [2024-12-14 12:54:23.813568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:24.338 [2024-12-14 12:54:23.813579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:28:24.338 [2024-12-14 12:54:23.813594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.850068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:23.850116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.338 [2024-12-14 12:54:23.850129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:23.850138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.850205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:23.850215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.338 [2024-12-14 12:54:23.850225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:23.850241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.850313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:23.850325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.338 [2024-12-14 12:54:23.850335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:23.850344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.850361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:23.850371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:28:24.338 [2024-12-14 12:54:23.850380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:23.850388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:23.934185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:23.934256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.338 [2024-12-14 12:54:23.934270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:23.934279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:24.003166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.338 [2024-12-14 12:54:24.003177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:24.003286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.338 [2024-12-14 12:54:24.003296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:24.003353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.338 [2024-12-14 12:54:24.003362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:24.003483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.338 [2024-12-14 12:54:24.003492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:24.003542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:24.338 [2024-12-14 12:54:24.003550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 [2024-12-14 12:54:24.003613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.338 [2024-12-14 12:54:24.003622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:24.338 
[2024-12-14 12:54:24.003696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.338 [2024-12-14 12:54:24.003704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:24.338 [2024-12-14 12:54:24.003713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.338 [2024-12-14 12:54:24.003850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 689.805 ms, result 0 00:28:25.723 00:28:25.723 00:28:25.723 12:54:25 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:28.266 12:54:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:28.266 [2024-12-14 12:54:27.422112] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:28:28.266 [2024-12-14 12:54:27.422199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83642 ] 00:28:28.266 [2024-12-14 12:54:27.576285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.266 [2024-12-14 12:54:27.680032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.266 [2024-12-14 12:54:27.976542] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:28.266 [2024-12-14 12:54:27.976625] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:28.528 [2024-12-14 12:54:28.138762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.528 [2024-12-14 12:54:28.139009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:28.528 [2024-12-14 12:54:28.139035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:28.528 [2024-12-14 12:54:28.139045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.528 [2024-12-14 12:54:28.139143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.528 [2024-12-14 12:54:28.139158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.528 [2024-12-14 12:54:28.139169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:28:28.528 [2024-12-14 12:54:28.139178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.528 [2024-12-14 12:54:28.139202] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:28.528 [2024-12-14 12:54:28.139908] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:28.528 [2024-12-14 12:54:28.139942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.528 [2024-12-14 12:54:28.139950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.528 [2024-12-14 12:54:28.139960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:28:28.528 [2024-12-14 12:54:28.139968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.528 [2024-12-14 12:54:28.141737] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:28.528 [2024-12-14 12:54:28.156409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.528 [2024-12-14 12:54:28.156604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:28.528 [2024-12-14 12:54:28.156802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.672 ms 00:28:28.528 [2024-12-14 12:54:28.156826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.528 [2024-12-14 12:54:28.156926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.528 [2024-12-14 12:54:28.156938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:28.528 [2024-12-14 12:54:28.156946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:28.528 [2024-12-14 12:54:28.156954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.528 [2024-12-14 12:54:28.165306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.528 [2024-12-14 12:54:28.165352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.528 [2024-12-14 12:54:28.165363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.269 ms 00:28:28.528 [2024-12-14 12:54:28.165379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.528 [2024-12-14 12:54:28.165474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.529 [2024-12-14 12:54:28.165485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.529 [2024-12-14 12:54:28.165494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:28:28.529 [2024-12-14 12:54:28.165503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.529 [2024-12-14 12:54:28.165547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.529 [2024-12-14 12:54:28.165557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:28.529 [2024-12-14 12:54:28.165565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:28.529 [2024-12-14 12:54:28.165572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.529 [2024-12-14 12:54:28.165599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:28.529 [2024-12-14 12:54:28.169705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.529 [2024-12-14 12:54:28.169744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.529 [2024-12-14 12:54:28.169758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.111 ms 00:28:28.529 [2024-12-14 12:54:28.169766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.529 [2024-12-14 12:54:28.169805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.529 [2024-12-14 12:54:28.169814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:28.529 [2024-12-14 12:54:28.169823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:28.529 [2024-12-14 12:54:28.169832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.529 [2024-12-14 12:54:28.169884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:28.529 [2024-12-14 12:54:28.169911] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:28.529 [2024-12-14 12:54:28.169949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:28.529 [2024-12-14 12:54:28.169968] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:28.529 [2024-12-14 12:54:28.170091] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:28.529 [2024-12-14 12:54:28.170105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:28.529 [2024-12-14 12:54:28.170117] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:28.529 [2024-12-14 12:54:28.170127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170137] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170145] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:28.529 [2024-12-14 12:54:28.170153] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:28.529 [2024-12-14 12:54:28.170161] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:28.529 [2024-12-14 12:54:28.170173] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:28.529 [2024-12-14 12:54:28.170181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.529 [2024-12-14 12:54:28.170189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:28.529 [2024-12-14 12:54:28.170197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:28:28.529 [2024-12-14 12:54:28.170204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.529 [2024-12-14 12:54:28.170287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.529 [2024-12-14 12:54:28.170296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:28.529 [2024-12-14 12:54:28.170305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:28:28.529 [2024-12-14 12:54:28.170312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.529 [2024-12-14 12:54:28.170414] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:28.529 [2024-12-14 12:54:28.170425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:28.529 [2024-12-14 12:54:28.170434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:28.529 [2024-12-14 12:54:28.170456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:28.529 [2024-12-14 12:54:28.170477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:28.529 [2024-12-14 
12:54:28.170484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.529 [2024-12-14 12:54:28.170491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:28.529 [2024-12-14 12:54:28.170498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:28.529 [2024-12-14 12:54:28.170505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:28.529 [2024-12-14 12:54:28.170522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:28.529 [2024-12-14 12:54:28.170531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:28.529 [2024-12-14 12:54:28.170539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:28.529 [2024-12-14 12:54:28.170552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:28.529 [2024-12-14 12:54:28.170572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:28.529 [2024-12-14 12:54:28.170592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:28.529 [2024-12-14 12:54:28.170613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:28.529 [2024-12-14 12:54:28.170634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:28.529 [2024-12-14 12:54:28.170654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.529 [2024-12-14 12:54:28.170667] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:28.529 [2024-12-14 12:54:28.170674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:28.529 [2024-12-14 12:54:28.170681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:28.529 [2024-12-14 12:54:28.170687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:28.529 [2024-12-14 12:54:28.170694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:28.529 [2024-12-14 12:54:28.170700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:28:28.529 [2024-12-14 12:54:28.170713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:28.529 [2024-12-14 12:54:28.170720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170726] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:28.529 [2024-12-14 12:54:28.170736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:28.529 [2024-12-14 12:54:28.170743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:28.529 [2024-12-14 12:54:28.170761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:28.529 [2024-12-14 12:54:28.170769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:28.529 [2024-12-14 12:54:28.170775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:28.529 [2024-12-14 12:54:28.170783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:28.529 [2024-12-14 12:54:28.170789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:28.529 [2024-12-14 12:54:28.170796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:28.529 [2024-12-14 12:54:28.170804] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:28.529 [2024-12-14 12:54:28.170813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.529 [2024-12-14 12:54:28.170825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:28.529 [2024-12-14 12:54:28.170833] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:28.529 [2024-12-14 12:54:28.170840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:28.529 [2024-12-14 12:54:28.170847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:28.529 [2024-12-14 12:54:28.170854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:28.529 [2024-12-14 12:54:28.170862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:28.529 [2024-12-14 12:54:28.170869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:28.529 [2024-12-14 12:54:28.170876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:28.529 [2024-12-14 12:54:28.170882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:28.529 [2024-12-14 12:54:28.170890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:28.529 [2024-12-14 12:54:28.170897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:28.529 [2024-12-14 12:54:28.170904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:28.530 [2024-12-14 12:54:28.170911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:28.530 [2024-12-14 12:54:28.170918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:28.530 [2024-12-14 12:54:28.170924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:28.530 [2024-12-14 12:54:28.170933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:28.530 [2024-12-14 12:54:28.170941] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:28.530 [2024-12-14 12:54:28.170948] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:28.530 [2024-12-14 12:54:28.170955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:28.530 [2024-12-14 12:54:28.170963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:28.530 [2024-12-14 12:54:28.170971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 12:54:28.170979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:28.530 [2024-12-14 12:54:28.170987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:28:28.530 [2024-12-14 12:54:28.170995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.530 [2024-12-14 12:54:28.203813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 12:54:28.203859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.530 [2024-12-14 12:54:28.203872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.771 ms 00:28:28.530 [2024-12-14 12:54:28.203884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.530 [2024-12-14 12:54:28.203972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 12:54:28.203980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:28.530 [2024-12-14 12:54:28.203989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:28:28.530 [2024-12-14 12:54:28.203998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.530 [2024-12-14 12:54:28.252345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 12:54:28.252402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.530 [2024-12-14 12:54:28.252415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.268 ms 00:28:28.530 [2024-12-14 12:54:28.252424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.530 [2024-12-14 12:54:28.252479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 
12:54:28.252490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.530 [2024-12-14 12:54:28.252503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:28.530 [2024-12-14 12:54:28.252510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.530 [2024-12-14 12:54:28.253115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 12:54:28.253154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.530 [2024-12-14 12:54:28.253165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:28:28.530 [2024-12-14 12:54:28.253174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.530 [2024-12-14 12:54:28.253343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.530 [2024-12-14 12:54:28.253440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.530 [2024-12-14 12:54:28.253458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:28:28.530 [2024-12-14 12:54:28.253467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.269770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.269820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.791 [2024-12-14 12:54:28.269832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.276 ms 00:28:28.791 [2024-12-14 12:54:28.269841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.284307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:28.791 [2024-12-14 12:54:28.284507] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:28.791 [2024-12-14 12:54:28.284529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.284538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:28.791 [2024-12-14 12:54:28.284547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.566 ms 00:28:28.791 [2024-12-14 12:54:28.284555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.311322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.311543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:28.791 [2024-12-14 12:54:28.311568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.371 ms 00:28:28.791 [2024-12-14 12:54:28.311577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.324692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.324741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:28.791 [2024-12-14 12:54:28.324755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.060 ms 00:28:28.791 [2024-12-14 12:54:28.324763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.337531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.337580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:28:28.791 [2024-12-14 12:54:28.337592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.715 ms 00:28:28.791 [2024-12-14 12:54:28.337599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.338316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.338343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:28.791 [2024-12-14 12:54:28.338358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:28:28.791 [2024-12-14 12:54:28.338366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.404707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.404774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:28.791 [2024-12-14 12:54:28.404799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.317 ms 00:28:28.791 [2024-12-14 12:54:28.404809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.416280] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:28.791 [2024-12-14 12:54:28.419662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.419711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:28.791 [2024-12-14 12:54:28.419724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.790 ms 00:28:28.791 [2024-12-14 12:54:28.419732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.419828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.419840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:28.791 [2024-12-14 12:54:28.419851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:28.791 [2024-12-14 12:54:28.419862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.421689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.421741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:28.791 [2024-12-14 12:54:28.421753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.787 ms 00:28:28.791 [2024-12-14 12:54:28.421762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.421793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.421803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:28.791 [2024-12-14 12:54:28.421812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:28.791 [2024-12-14 12:54:28.421821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.421872] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:28.791 [2024-12-14 12:54:28.421883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.421892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:28.791 [2024-12-14 12:54:28.421902] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:28.791 [2024-12-14 12:54:28.421910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.448273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.448455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:28.791 [2024-12-14 12:54:28.448531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.343 ms 00:28:28.791 [2024-12-14 12:54:28.448555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.448647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.791 [2024-12-14 12:54:28.448672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:28.791 [2024-12-14 12:54:28.448693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:28:28.791 [2024-12-14 12:54:28.448713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.791 [2024-12-14 12:54:28.450052] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 310.792 ms, result 0 00:28:30.177  [2024-12-14T12:54:30.899Z] Copying: 988/1048576 [kB] (988 kBps) [2024-12-14T12:54:31.864Z] Copying: 4268/1048576 [kB] (3280 kBps) [2024-12-14T12:54:32.807Z] Copying: 31/1024 [MB] (27 MBps) [2024-12-14T12:54:33.750Z] Copying: 54/1024 [MB] (23 MBps) [2024-12-14T12:54:34.692Z] Copying: 80/1024 [MB] (25 MBps) [2024-12-14T12:54:35.634Z] Copying: 111/1024 [MB] (31 MBps) [2024-12-14T12:54:37.020Z] Copying: 139/1024 [MB] (27 MBps) [2024-12-14T12:54:37.963Z] Copying: 174/1024 [MB] (35 MBps) [2024-12-14T12:54:38.906Z] Copying: 205/1024 [MB] (30 MBps) [2024-12-14T12:54:39.850Z] Copying: 231/1024 [MB] (26 MBps) [2024-12-14T12:54:40.794Z] Copying: 262/1024 [MB] (31 MBps) [2024-12-14T12:54:41.736Z] Copying: 293/1024 [MB] (31 MBps) [2024-12-14T12:54:42.677Z] Copying: 324/1024 [MB] (31 MBps) [2024-12-14T12:54:44.063Z] Copying: 357/1024 [MB] (32 MBps) [2024-12-14T12:54:45.006Z] Copying: 385/1024 [MB] (27 MBps) [2024-12-14T12:54:45.951Z] Copying: 415/1024 [MB] (30 MBps) [2024-12-14T12:54:46.893Z] Copying: 444/1024 [MB] (29 MBps) [2024-12-14T12:54:47.837Z] Copying: 482/1024 [MB] (37 MBps) [2024-12-14T12:54:48.780Z] Copying: 512/1024 [MB] (30 MBps) [2024-12-14T12:54:49.723Z] Copying: 545/1024 [MB] (33 MBps) [2024-12-14T12:54:50.666Z] Copying: 578/1024 [MB] (32 MBps) [2024-12-14T12:54:52.052Z] Copying: 608/1024 [MB] (29 MBps) [2024-12-14T12:54:52.994Z] Copying: 642/1024 [MB] (34 MBps) [2024-12-14T12:54:53.935Z] Copying: 672/1024 [MB] (29 MBps) [2024-12-14T12:54:54.877Z] Copying: 700/1024 [MB] (28 MBps) [2024-12-14T12:54:55.820Z] Copying: 725/1024 [MB] (25 MBps) [2024-12-14T12:54:56.784Z] Copying: 749/1024 [MB] (23 MBps) [2024-12-14T12:54:57.781Z] Copying: 778/1024 [MB] (29 MBps) [2024-12-14T12:54:58.724Z] Copying: 807/1024 [MB] (29 MBps) [2024-12-14T12:54:59.666Z] Copying: 837/1024 [MB] (29 MBps) [2024-12-14T12:55:01.051Z] Copying: 867/1024 [MB] (30 MBps) [2024-12-14T12:55:01.994Z] Copying: 904/1024 [MB] (37 MBps) [2024-12-14T12:55:02.937Z] Copying: 930/1024 [MB] (26 MBps) [2024-12-14T12:55:03.881Z] Copying: 954/1024 [MB] (23 MBps) [2024-12-14T12:55:04.823Z] Copying: 973/1024 [MB] (19 MBps) [2024-12-14T12:55:05.764Z] Copying: 1003/1024 [MB] (29 MBps) [2024-12-14T12:55:06.027Z] Copying: 1024/1024 [MB] (average 27 MBps)[2024-12-14 12:55:05.847488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.847672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:06.290 [2024-12-14 12:55:05.847695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:06.290 [2024-12-14 12:55:05.847706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.847737] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:06.290 [2024-12-14 12:55:05.851473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.851521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:06.290 [2024-12-14 12:55:05.851534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.714 ms 00:29:06.290 [2024-12-14 12:55:05.851544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.851789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.851808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:06.290 [2024-12-14 12:55:05.851818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:29:06.290 [2024-12-14 12:55:05.851827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.867068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.867121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:06.290 [2024-12-14 12:55:05.867136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.223 ms 00:29:06.290 [2024-12-14 12:55:05.867145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.873524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.873688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:06.290 [2024-12-14 12:55:05.873715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:29:06.290 [2024-12-14 12:55:05.873724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.900660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.900835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:06.290 [2024-12-14 12:55:05.900856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.876 ms 00:29:06.290 [2024-12-14 12:55:05.900865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.917436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.917485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:06.290 [2024-12-14 12:55:05.917498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.460 ms 00:29:06.290 [2024-12-14 12:55:05.917508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.290 [2024-12-14 12:55:05.922008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.290 [2024-12-14 12:55:05.922076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:06.290 [2024-12-14 12:55:05.922088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.448 ms 00:29:06.290 [2024-12-14 12:55:05.922103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.291 [2024-12-14 12:55:05.947552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.291 [2024-12-14 12:55:05.947724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:06.291 [2024-12-14 12:55:05.947743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.433 ms 00:29:06.291 [2024-12-14 12:55:05.947751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.291 [2024-12-14 12:55:05.973073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.291 [2024-12-14 12:55:05.973119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:06.291 [2024-12-14 12:55:05.973131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.210 ms 00:29:06.291 [2024-12-14 12:55:05.973138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.291 [2024-12-14 12:55:05.997082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.291 [2024-12-14 12:55:05.997126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:06.291 [2024-12-14 12:55:05.997138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.900 ms 00:29:06.291 [2024-12-14 12:55:05.997145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.291 [2024-12-14 12:55:06.021126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.291 [2024-12-14 12:55:06.021168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:06.291 [2024-12-14 12:55:06.021180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.911 ms 00:29:06.291 [2024-12-14 12:55:06.021188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.291 [2024-12-14 12:55:06.021230] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:06.291 [2024-12-14 12:55:06.021247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:06.291 [2024-12-14 12:55:06.021258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:06.291 [2024-12-14 12:55:06.021267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021332] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 12:55:06.021542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:06.291 [2024-12-14 
12:55:06.021549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
[2024-12-14 12:55:06] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 37-100: 0 / 261120 wr_cnt: 0 state: free
[2024-12-14 12:55:06.022077] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-12-14 12:55:06.022087] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1f69b931-ed0c-40a1-9030-509521ad0f68
[2024-12-14 12:55:06.022096] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
[2024-12-14 12:55:06.022104] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158912
[2024-12-14 12:55:06.022115] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156928
[2024-12-14 12:55:06.022124] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126
[2024-12-14 12:55:06.022132] ftl_debug.c: 218-220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
[2024-12-14 12:55:06.022197] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Dump statistics': duration 0.968 ms, status 0
[2024-12-14 12:55:06.035666] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize L2P': duration 13.411 ms, status 0
[2024-12-14 12:55:06.036171] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinitialize P2L checkpointing': duration 0.406 ms, status 0
[2024-12-14 12:55:06.072208 - 12:55:06.224662] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each duration 0.000 ms, status 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-12-14 12:55:06.224796] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.289 ms, result 0
12:55:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
/home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
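The statistics block above carries the inputs behind the reported WAF: 158912 total device writes against 156928 user writes. Below is a minimal sketch of that ratio (the helper name is ours, not an SPDK API); the second dump at the end of this run prints "WAF: inf" because it sees 960 device writes and zero user writes.

    def write_amplification(total_writes: int, user_writes: int) -> float:
        """Write amplification factor: device writes per user write."""
        if user_writes == 0:
            return float("inf")  # the clean-shutdown dump later logs 'WAF: inf' for 960 / 0
        return total_writes / user_writes

    print(f"{write_amplification(158912, 156928):.4f}")  # -> 1.0126, as logged above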
00:29:08.881 12:55:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-14 12:55:08.649303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
[2024-12-14 12:55:08.649394] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84057 ]
[2024-12-14 12:55:08.804591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-14 12:55:08.903328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-14 12:55:09.201816] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-14 12:55:09.201903] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-14 12:55:09.363572] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Check configuration': duration 0.004 ms, status 0
[2024-12-14 12:55:09.363718] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open base bdev': duration 0.036 ms, status 0
[2024-12-14 12:55:09.363770] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-14 12:55:09.364662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-12-14 12:55:09.364700] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Open cache bdev': duration 0.935 ms, status 0
[2024-12-14 12:55:09.366387] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-12-14 12:55:09.380521] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Load super block': duration 14.137 ms, status 0
[2024-12-14 12:55:09.380888] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Validate super block': duration 0.030 ms, status 0
[2024-12-14 12:55:09.388869] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize memory pools': duration 7.856 ms, status 0
[2024-12-14 12:55:09.389019] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands': duration 0.057 ms, status 0
[2024-12-14 12:55:09.389114] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Register IO device': duration 0.008 ms, status 0
[2024-12-14 12:55:09.389170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-12-14 12:55:09.393066] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize core IO channel': duration 3.901 ms, status 0
[2024-12-14 12:55:09.393165] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Decorate bands': duration 0.013 ms, status 0
[2024-12-14 12:55:09.393242] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-12-14 12:55:09.393267] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area/store_blob_area: *NOTICE*: [FTL][ftl0] nvc/base/layout blob areas: load 0x150/0x48/0x190 bytes, store 0x150/0x48/0x190 bytes
[2024-12-14 12:55:09.393481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-12-14 12:55:09.393491] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-12-14 12:55:09.393499] ftl_layout.c: 689-692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520, L2P address size: 4, P2L checkpoint pages: 2048, NV cache chunk count: 5
[2024-12-14 12:55:09.393534] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize layout': duration 0.295 ms, status 0
[2024-12-14 12:55:09.393643] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Verify layout': duration 0.069 ms, status 0
[2024-12-14 12:55:09.393768] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region / offset MiB / blocks MiB):
    sb                 0.00      0.12
    l2p                0.12     80.00
    band_md           80.12      0.50
    band_md_mirror    80.62      0.50
    nvc_md           113.88      0.12
    nvc_md_mirror    114.00      0.12
    p2l0              81.12      8.00
    p2l1              89.12      8.00
    p2l2              97.12      8.00
    p2l3             105.12      8.00
    trim_md          113.12      0.25
    trim_md_mirror   113.38      0.25
    trim_log         113.62      0.12
    trim_log_mirror  113.75      0.12
[2024-12-14 12:55:09.394121] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region / offset MiB / blocks MiB):
    sb_mirror          0.00      0.12
    vmap          102400.25      3.38
    data_btm           0.25 102400.00
[2024-12-14 12:55:09.394200] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
    Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
    Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
    Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
    Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
    Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
    Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
    Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
    Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
    Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
    Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
    Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
    Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
[2024-12-14 12:55:09.394329] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
    Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
    Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
    Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
    Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
    Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-12-14 12:55:09.394376] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Layout upgrade': duration 0.677 ms, status 0
[2024-12-14 12:55:09.425995] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize metadata': duration 31.551 ms, status 0
[2024-12-14 12:55:09.426405] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize band addresses': duration 0.064 ms, status 0
[2024-12-14 12:55:09.470258] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize NV cache': duration 43.716 ms, status 0
[2024-12-14 12:55:09.470602] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize valid map': duration 0.004 ms, status 0
[2024-12-14 12:55:09.471286] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize trim map': duration 0.512 ms, status 0
[2024-12-14 12:55:09.471679] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize bands metadata': duration 0.130 ms, status 0
[2024-12-14 12:55:09.487698] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize reloc': duration 15.877 ms, status 0
[2024-12-14 12:55:09.502273] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-12-14 12:55:09.502452] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-12-14 12:55:09.502517] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore NV cache metadata': duration 14.441 ms, status 0
[2024-12-14 12:55:09.528537] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore valid map metadata': duration 25.904 ms, status 0
[2024-12-14 12:55:09.541788] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore band info metadata': duration 12.924 ms, status 0
[2024-12-14 12:55:09.554360] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore trim metadata': duration 12.196 ms, status 0
[2024-12-14 12:55:09.555405] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize P2L checkpointing': duration 0.561 ms, status 0
[2024-12-14 12:55:09.617676] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore P2L checkpoints': duration 62.028 ms, status 0
[2024-12-14 12:55:09.629557] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
[2024-12-14 12:55:09.632486] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Initialize L2P': duration 14.396 ms, status 0
[2024-12-14 12:55:09.632802] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Restore L2P': duration 0.018 ms, status 0
[2024-12-14 12:55:09.633719] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize band initialization': duration 0.791 ms, status 0
[2024-12-14 12:55:09.633971] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Start core poller': duration 0.007 ms, status 0
[2024-12-14 12:55:09.634047] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-12-14 12:55:09.634080] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Self test on startup': duration 0.034 ms, status 0
[2024-12-14 12:55:09.659202] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL dirty state': duration 25.076 ms, status 0
[2024-12-14 12:55:09.659484] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finalize initialization': duration 0.038 ms, status 0
[2024-12-14 12:55:09.661319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.242 ms, result 0
[2024-12-14T12:55:12Z - 2024-12-14T12:56:15Z] Copying: 14/1024 -> 1024/1024 [MB], per-second samples between 10 and 27 MBps (average 15 MBps)
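The spdk_dd invocation above copies --count=262144 blocks starting at --skip=262144 into testfile2. Assuming a 4 KiB FTL block size (our assumption; the log itself only shows the 1024 MB progress total, which that size reproduces), the offsets work out as in this small sketch:

    # Offsets behind the spdk_dd invocation above; BLOCK_SIZE is an assumption.
    BLOCK_SIZE = 4096        # typical FTL bdev block size (not stated in the log)
    COUNT = 262144           # --count: blocks to copy
    SKIP = 262144            # --skip: input blocks to skip before copying

    MIB = 1024 * 1024
    print(COUNT * BLOCK_SIZE // MIB, SKIP * BLOCK_SIZE // MIB)
    # -> 1024 1024: copy 1024 MiB starting 1024 MiB into ftl0,
    #    matching the 'Copying: 1024/1024 [MB]' total above

At the reported average of 15 MBps, 1024 MB is roughly 68 s of copying, in line with the ~63 s spread of the progress timestamps (12:55:12Z through 12:56:15Z).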
[2024-12-14 12:56:15.245625] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Deinit core IO channel': duration 0.005 ms, status 0
[2024-12-14 12:56:15.245987] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-14 12:56:15.249656] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Unregister IO device': duration 3.648 ms, status 0
[2024-12-14 12:56:15.250021] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Stop core poller': duration 0.245 ms, status 0
[2024-12-14 12:56:15.253642] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist L2P': duration 3.546 ms, status 0
[2024-12-14 12:56:15.260843] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Finish L2P trims': duration 7.124 ms, status 0
[2024-12-14 12:56:15.289833] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist NV cache metadata': duration 28.847 ms, status 0
[2024-12-14 12:56:15.309739] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist valid map metadata': duration 19.767 ms, status 0
[2024-12-14 12:56:15.313700] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist P2L metadata': duration 3.767 ms, status 0
[2024-12-14 12:56:15.340181] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist band info metadata': duration 26.380 ms, status 0
[2024-12-14 12:56:15.365654] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist trim metadata': duration 25.350 ms, status 0
[2024-12-14 12:56:15.390585] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Persist superblock': duration 24.813 ms, status 0
[2024-12-14 12:56:15.415778] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action 'Set FTL clean state': duration 25.044 ms, status 0
[2024-12-14 12:56:15.415895] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-12-14 12:56:15.415920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
[2024-12-14 12:56:15.415936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
[2024-12-14 12:56:15] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free
12:56:15.416785] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:15.904 [2024-12-14 12:56:15.416793] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:15.904 [2024-12-14 12:56:15.416800] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:15.904 [2024-12-14 12:56:15.416809] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:15.904 [2024-12-14 12:56:15.416823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:15.904 [2024-12-14 12:56:15.416832] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:15.904 [2024-12-14 12:56:15.416841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:15.904 [2024-12-14 12:56:15.416848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:15.904 [2024-12-14 12:56:15.416855] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:15.904 [2024-12-14 12:56:15.416863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.904 [2024-12-14 12:56:15.416871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:15.904 [2024-12-14 12:56:15.416881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.969 ms 00:30:15.904 [2024-12-14 12:56:15.416891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.430474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.904 [2024-12-14 12:56:15.430671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:15.904 [2024-12-14 12:56:15.430691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.562 ms 00:30:15.904 [2024-12-14 12:56:15.430700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.431137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:15.904 [2024-12-14 12:56:15.431161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:15.904 [2024-12-14 12:56:15.431172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:30:15.904 [2024-12-14 12:56:15.431179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.468158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.468211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:15.904 [2024-12-14 12:56:15.468223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.468234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.468316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.468333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:15.904 [2024-12-14 12:56:15.468343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.468352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.468453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.468466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:15.904 [2024-12-14 12:56:15.468475] mngt/ftl_mngt.c: 
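Note on the "WAF: inf" reading above: write amplification factor is conventionally total media writes divided by user writes, so with total writes = 960 (presumably all metadata from startup and shutdown) and user writes = 0 the ratio is undefined and ftl_debug prints "inf". A minimal bash sketch of that computation (a standalone illustration, not the ftl_debug.c source):

    total_writes=960   # from "total writes: 960" above
    user_writes=0      # from "user writes: 0" above
    if (( user_writes == 0 )); then
        echo "WAF: inf"   # matches the NOTICE above
    else
        # bash arithmetic is integer-only; let awk do the float division
        awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
    fi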
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.468484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.468502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.468511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:15.904 [2024-12-14 12:56:15.468523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.468531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.553281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.553515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:15.904 [2024-12-14 12:56:15.553539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.553548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:15.904 [2024-12-14 12:56:15.623181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:15.904 [2024-12-14 12:56:15.623273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:15.904 [2024-12-14 12:56:15.623368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:15.904 [2024-12-14 12:56:15.623497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:15.904 [2024-12-14 12:56:15.623558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:15.904 [2024-12-14 12:56:15.623633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:15.904 [2024-12-14 12:56:15.623713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:15.904 [2024-12-14 12:56:15.623723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:15.904 [2024-12-14 12:56:15.623735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:15.904 [2024-12-14 12:56:15.623873] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 378.228 ms, result 0 00:30:16.848 00:30:16.848 00:30:16.848 12:56:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:19.397 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:19.397 Process with pid 82080 is not found 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 82080 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82080 ']' 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 82080 00:30:19.397 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82080) - No such process 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 82080 is not found' 00:30:19.397 12:56:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:19.397 Remove shared memory files 00:30:19.397 12:56:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:19.397 12:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:19.397 12:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:19.397 12:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:19.397 12:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:19.659 12:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:19.659 12:56:19 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:19.659 ************************************ 00:30:19.659 END TEST ftl_dirty_shutdown 00:30:19.659 ************************************ 00:30:19.659 00:30:19.659 real 4m18.548s 00:30:19.659 user 4m51.287s 00:30:19.659 sys 0m28.650s 00:30:19.659 12:56:19 
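The tail of ftl_dirty_shutdown above is a standard trap-then-disarm cleanup: verify the md5, drop the error trap, and run the cleanup function directly. Collected into one hedged sketch (the restore_kill body is reconstructed from the traced lines 31-39 of dirty_shutdown.sh, and the pid variable name is illustrative):

    restore_kill() {
        rm -f "$testdir/config/ftl.json"
        rm -f "$testdir/testfile" "$testdir/testfile2"
        rm -f "$testdir/testfile.md5" "$testdir/testfile2.md5"
        killprocess "$ftl_pid"   # pid 82080 had already exited; the helper just reports that
        rmmod nbd || true
        remove_shm               # ftl/common.sh@204: "Remove shared memory files"
    }
    md5sum -c "$testdir/testfile2.md5"   # @96: data written before the dirty shutdown verifies OK
    trap - SIGINT SIGTERM EXIT           # @98: success, disarm the cleanup-and-exit trap
    restore_kill                         # @99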
ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.659 12:56:19 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:19.659 12:56:19 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:19.659 12:56:19 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.659 12:56:19 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.659 12:56:19 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:19.659 ************************************ 00:30:19.659 START TEST ftl_upgrade_shutdown 00:30:19.659 ************************************ 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:19.659 * Looking for test storage... 00:30:19.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.659 --rc genhtml_branch_coverage=1 00:30:19.659 --rc genhtml_function_coverage=1 00:30:19.659 --rc genhtml_legend=1 00:30:19.659 --rc geninfo_all_blocks=1 00:30:19.659 --rc geninfo_unexecuted_blocks=1 00:30:19.659 00:30:19.659 ' 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:19.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.659 --rc genhtml_branch_coverage=1 00:30:19.659 --rc genhtml_function_coverage=1 00:30:19.659 --rc genhtml_legend=1 00:30:19.659 --rc geninfo_all_blocks=1 00:30:19.659 --rc geninfo_unexecuted_blocks=1 00:30:19.659 00:30:19.659 ' 00:30:19.659 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:19.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.660 --rc genhtml_branch_coverage=1 00:30:19.660 --rc genhtml_function_coverage=1 00:30:19.660 --rc genhtml_legend=1 00:30:19.660 --rc geninfo_all_blocks=1 00:30:19.660 --rc geninfo_unexecuted_blocks=1 00:30:19.660 00:30:19.660 ' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:19.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.660 --rc genhtml_branch_coverage=1 00:30:19.660 --rc genhtml_function_coverage=1 00:30:19.660 --rc genhtml_legend=1 00:30:19.660 --rc geninfo_all_blocks=1 00:30:19.660 --rc geninfo_unexecuted_blocks=1 00:30:19.660 00:30:19.660 ' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- 
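The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which determines whether the legacy --rc lcov_*_coverage option spellings are exported. A condensed, hedged re-sketch of that comparison (the traced cmp_versions also handles >, <= and friends; this keeps only the "<" path):

    # Split both versions on "." "-" ":" and compare numerically, field by field.
    version_lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first differing field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: keep legacy --rc lcov_branch_coverage=1 flags"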
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:19.660 12:56:19 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84837 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84837 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84837 ']' 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.660 12:56:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:19.921 [2024-12-14 12:56:19.470345] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
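tcp_target_setup above launches a fresh spdk_tgt (pid 84837 in this run) pinned to core 0 and then blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that helper, using the traced defaults (rpc_addr=/var/tmp/spdk.sock, max_retries=100); the real autotest_common.sh version carries more diagnostics:

    "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
    spdk_tgt_pid=$!
    waitforlisten_sketch() {   # hedged stand-in, not the actual helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }
    waitforlisten_sketch "$spdk_tgt_pid"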
00:30:19.921 [2024-12-14 12:56:19.470718] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84837 ] 00:30:19.921 [2024-12-14 12:56:19.633691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.183 [2024-12-14 12:56:19.737039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:20.756 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:21.017 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:21.278 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:21.278 { 00:30:21.278 "name": "basen1", 00:30:21.278 "aliases": [ 00:30:21.278 "e1af60da-f889-4cbb-96b1-b1b52d53fa07" 00:30:21.278 ], 00:30:21.278 "product_name": "NVMe disk", 00:30:21.278 "block_size": 4096, 00:30:21.278 "num_blocks": 1310720, 00:30:21.278 "uuid": "e1af60da-f889-4cbb-96b1-b1b52d53fa07", 00:30:21.278 "numa_id": -1, 00:30:21.278 "assigned_rate_limits": { 00:30:21.279 "rw_ios_per_sec": 0, 00:30:21.279 "rw_mbytes_per_sec": 0, 00:30:21.279 "r_mbytes_per_sec": 0, 00:30:21.279 "w_mbytes_per_sec": 0 00:30:21.279 }, 00:30:21.279 "claimed": true, 00:30:21.279 "claim_type": "read_many_write_one", 00:30:21.279 "zoned": false, 00:30:21.279 "supported_io_types": { 00:30:21.279 "read": true, 00:30:21.279 "write": true, 00:30:21.279 "unmap": true, 00:30:21.279 "flush": true, 00:30:21.279 "reset": true, 00:30:21.279 "nvme_admin": true, 00:30:21.279 "nvme_io": true, 00:30:21.279 "nvme_io_md": false, 00:30:21.279 "write_zeroes": true, 00:30:21.279 "zcopy": false, 00:30:21.279 "get_zone_info": false, 00:30:21.279 "zone_management": false, 00:30:21.279 "zone_append": false, 00:30:21.279 "compare": true, 00:30:21.279 "compare_and_write": false, 00:30:21.279 "abort": true, 00:30:21.279 "seek_hole": false, 00:30:21.279 "seek_data": false, 00:30:21.279 "copy": true, 00:30:21.279 "nvme_iov_md": false 00:30:21.279 }, 00:30:21.279 "driver_specific": { 00:30:21.279 "nvme": [ 00:30:21.279 { 00:30:21.279 "pci_address": "0000:00:11.0", 00:30:21.279 "trid": { 00:30:21.279 "trtype": "PCIe", 00:30:21.279 "traddr": "0000:00:11.0" 00:30:21.279 }, 00:30:21.279 "ctrlr_data": { 00:30:21.279 "cntlid": 0, 00:30:21.279 "vendor_id": "0x1b36", 00:30:21.279 "model_number": "QEMU NVMe Ctrl", 00:30:21.279 "serial_number": "12341", 00:30:21.279 "firmware_revision": "8.0.0", 00:30:21.279 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:21.279 "oacs": { 00:30:21.279 "security": 0, 00:30:21.279 "format": 1, 00:30:21.279 "firmware": 0, 00:30:21.279 "ns_manage": 1 00:30:21.279 }, 00:30:21.279 "multi_ctrlr": false, 00:30:21.279 "ana_reporting": false 00:30:21.279 }, 00:30:21.279 "vs": { 00:30:21.279 "nvme_version": "1.4" 00:30:21.279 }, 00:30:21.279 "ns_data": { 00:30:21.279 "id": 1, 00:30:21.279 "can_share": false 00:30:21.279 } 00:30:21.279 } 00:30:21.279 ], 00:30:21.279 "mp_policy": "active_passive" 00:30:21.279 } 00:30:21.279 } 00:30:21.279 ]' 00:30:21.279 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:21.279 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:21.279 12:56:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- 
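get_bdev_size above reduces to block_size times num_blocks, reported in MiB: 1310720 blocks x 4096 B = 5,368,709,120 B = 5120 MiB. That is why base_size becomes 5120 and the common.sh@64 check "[[ 20480 -le 5120 ]]" fails: the raw namespace is smaller than the requested 20480 MiB base, so the script proceeds to carve a thin-provisioned lvol instead (next trace block). A hedged condensation of the helper (autotest_common.sh@1382-1392):

    info=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b basen1)
    bs=$(jq '.[] .block_size' <<< "$info")   # 4096
    nb=$(jq '.[] .num_blocks' <<< "$info")   # 1310720
    echo $(( bs * nb / 1024 / 1024 ))        # 5120 MiB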
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:21.279 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:21.540 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=2a6c18bc-2997-42a2-a6e9-1edd0fd53d24 00:30:21.540 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:21.540 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2a6c18bc-2997-42a2-a6e9-1edd0fd53d24 00:30:21.801 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=3e7250b4-b3ef-4129-ae85-3da469eade75 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 3e7250b4-b3ef-4129-ae85-3da469eade75 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=0b3b0a9b-5122-466a-899d-519ee173101d 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 0b3b0a9b-5122-466a-899d-519ee173101d ]] 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 0b3b0a9b-5122-466a-899d-519ee173101d 5120 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=0b3b0a9b-5122-466a-899d-519ee173101d 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0b3b0a9b-5122-466a-899d-519ee173101d 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=0b3b0a9b-5122-466a-899d-519ee173101d 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:22.063 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0b3b0a9b-5122-466a-899d-519ee173101d 00:30:22.325 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:22.325 { 00:30:22.325 "name": "0b3b0a9b-5122-466a-899d-519ee173101d", 00:30:22.325 "aliases": [ 00:30:22.325 "lvs/basen1p0" 00:30:22.325 ], 00:30:22.325 "product_name": "Logical Volume", 00:30:22.325 "block_size": 4096, 00:30:22.325 "num_blocks": 5242880, 00:30:22.325 "uuid": "0b3b0a9b-5122-466a-899d-519ee173101d", 00:30:22.325 "assigned_rate_limits": { 00:30:22.325 "rw_ios_per_sec": 0, 00:30:22.325 "rw_mbytes_per_sec": 0, 00:30:22.325 "r_mbytes_per_sec": 0, 00:30:22.325 "w_mbytes_per_sec": 0 00:30:22.325 }, 00:30:22.325 "claimed": false, 00:30:22.325 "zoned": false, 00:30:22.325 "supported_io_types": { 00:30:22.325 "read": true, 00:30:22.325 "write": true, 00:30:22.325 "unmap": true, 00:30:22.325 "flush": false, 00:30:22.325 "reset": true, 00:30:22.325 "nvme_admin": false, 00:30:22.325 "nvme_io": false, 00:30:22.325 "nvme_io_md": false, 00:30:22.325 "write_zeroes": 
true, 00:30:22.325 "zcopy": false, 00:30:22.325 "get_zone_info": false, 00:30:22.325 "zone_management": false, 00:30:22.325 "zone_append": false, 00:30:22.325 "compare": false, 00:30:22.325 "compare_and_write": false, 00:30:22.325 "abort": false, 00:30:22.325 "seek_hole": true, 00:30:22.325 "seek_data": true, 00:30:22.325 "copy": false, 00:30:22.325 "nvme_iov_md": false 00:30:22.325 }, 00:30:22.325 "driver_specific": { 00:30:22.325 "lvol": { 00:30:22.325 "lvol_store_uuid": "3e7250b4-b3ef-4129-ae85-3da469eade75", 00:30:22.325 "base_bdev": "basen1", 00:30:22.325 "thin_provision": true, 00:30:22.325 "num_allocated_clusters": 0, 00:30:22.325 "snapshot": false, 00:30:22.325 "clone": false, 00:30:22.325 "esnap_clone": false 00:30:22.325 } 00:30:22.325 } 00:30:22.325 } 00:30:22.325 ]' 00:30:22.325 12:56:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:22.325 12:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:22.325 12:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:22.586 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:22.848 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:22.848 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:22.848 12:56:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 0b3b0a9b-5122-466a-899d-519ee173101d -c cachen1p0 --l2p_dram_limit 2 00:30:23.110 [2024-12-14 12:56:22.712404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.110 [2024-12-14 12:56:22.712442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:23.110 [2024-12-14 12:56:22.712454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:23.110 [2024-12-14 12:56:22.712461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.110 [2024-12-14 12:56:22.712503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.110 [2024-12-14 12:56:22.712511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:23.110 [2024-12-14 12:56:22.712518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:23.110 [2024-12-14 12:56:22.712524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.110 [2024-12-14 12:56:22.712540] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:23.111 [2024-12-14 
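The trace above finishes assembling FTL's two backing devices: a 20480 MiB thin-provisioned lvol on basen1 (the base device) and the first 5120 MiB of cachen1 (the NV cache). The same sequence, collected into one hedged sketch (commands and UUIDs exactly as this run traced them):

    rpc=$rootdir/scripts/rpc.py
    "$rpc" bdev_lvol_delete_lvstore -u 2a6c18bc-2997-42a2-a6e9-1edd0fd53d24   # clear_lvols: drop the stale store
    lvs=$("$rpc" bdev_lvol_create_lvstore basen1 lvs)    # -> 3e7250b4-b3ef-4129-ae85-3da469eade75
    "$rpc" bdev_lvol_create basen1p0 20480 -t -u "$lvs"  # thin 20 GiB -> 0b3b0a9b-5122-466a-899d-519ee173101d
    "$rpc" bdev_split_create cachen1 -s 5120 1           # -> cachen1p0, the write-buffer cache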
12:56:22.713188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:23.111 [2024-12-14 12:56:22.713231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.713248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:23.111 [2024-12-14 12:56:22.713267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.692 ms 00:30:23.111 [2024-12-14 12:56:22.713281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.713365] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 49f72a2b-0888-4059-a320-156546a3e0aa 00:30:23.111 [2024-12-14 12:56:22.714366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.714458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:23.111 [2024-12-14 12:56:22.714510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:23.111 [2024-12-14 12:56:22.714530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.719285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.719389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:23.111 [2024-12-14 12:56:22.719436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.706 ms 00:30:23.111 [2024-12-14 12:56:22.719455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.719494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.719513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:23.111 [2024-12-14 12:56:22.719528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:23.111 [2024-12-14 12:56:22.719581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.719632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.719653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:23.111 [2024-12-14 12:56:22.719668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:23.111 [2024-12-14 12:56:22.719688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.719712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:23.111 [2024-12-14 12:56:22.722684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.722780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:23.111 [2024-12-14 12:56:22.722833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.974 ms 00:30:23.111 [2024-12-14 12:56:22.722842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.722866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.722873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:23.111 [2024-12-14 12:56:22.722881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:23.111 [2024-12-14 12:56:22.722887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.722901] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:23.111 [2024-12-14 12:56:22.723009] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:23.111 [2024-12-14 12:56:22.723021] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:23.111 [2024-12-14 12:56:22.723029] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:23.111 [2024-12-14 12:56:22.723039] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723045] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723053] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:23.111 [2024-12-14 12:56:22.723080] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:23.111 [2024-12-14 12:56:22.723090] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:23.111 [2024-12-14 12:56:22.723096] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:23.111 [2024-12-14 12:56:22.723103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.723108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:23.111 [2024-12-14 12:56:22.723116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:30:23.111 [2024-12-14 12:56:22.723121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.723188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.111 [2024-12-14 12:56:22.723198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:23.111 [2024-12-14 12:56:22.723206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:23.111 [2024-12-14 12:56:22.723211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.111 [2024-12-14 12:56:22.723286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:23.111 [2024-12-14 12:56:22.723294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:23.111 [2024-12-14 12:56:22.723301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:23.111 [2024-12-14 12:56:22.723319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:23.111 [2024-12-14 12:56:22.723331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:23.111 [2024-12-14 12:56:22.723337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:23.111 [2024-12-14 12:56:22.723342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:23.111 [2024-12-14 12:56:22.723353] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:23.111 [2024-12-14 12:56:22.723360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:23.111 [2024-12-14 12:56:22.723374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:23.111 [2024-12-14 12:56:22.723379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:23.111 [2024-12-14 12:56:22.723391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:23.111 [2024-12-14 12:56:22.723399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:23.111 [2024-12-14 12:56:22.723410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:23.111 [2024-12-14 12:56:22.723415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:23.111 [2024-12-14 12:56:22.723426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:23.111 [2024-12-14 12:56:22.723432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:23.111 [2024-12-14 12:56:22.723443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:23.111 [2024-12-14 12:56:22.723448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:23.111 [2024-12-14 12:56:22.723460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:23.111 [2024-12-14 12:56:22.723466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:23.111 [2024-12-14 12:56:22.723478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:23.111 [2024-12-14 12:56:22.723483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:23.111 [2024-12-14 12:56:22.723494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:23.111 [2024-12-14 12:56:22.723512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:23.111 [2024-12-14 12:56:22.723528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:23.111 [2024-12-14 12:56:22.723535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723539] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:23.111 [2024-12-14 12:56:22.723547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:23.111 [2024-12-14 12:56:22.723552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:23.111 [2024-12-14 12:56:22.723564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:23.111 [2024-12-14 12:56:22.723572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:23.111 [2024-12-14 12:56:22.723577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:23.111 [2024-12-14 12:56:22.723586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:23.111 [2024-12-14 12:56:22.723591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:23.111 [2024-12-14 12:56:22.723597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:23.111 [2024-12-14 12:56:22.723603] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:23.111 [2024-12-14 12:56:22.723611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.111 [2024-12-14 12:56:22.723620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:23.111 [2024-12-14 12:56:22.723627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:23.112 [2024-12-14 12:56:22.723644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:23.112 [2024-12-14 12:56:22.723651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:23.112 [2024-12-14 12:56:22.723656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:23.112 [2024-12-14 12:56:22.723663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:23.112 [2024-12-14 12:56:22.723708] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:23.112 [2024-12-14 12:56:22.723715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:23.112 [2024-12-14 12:56:22.723728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:23.112 [2024-12-14 12:56:22.723733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:23.112 [2024-12-14 12:56:22.723740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:23.112 [2024-12-14 12:56:22.723746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:23.112 [2024-12-14 12:56:22.723753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:23.112 [2024-12-14 12:56:22.723759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.513 ms 00:30:23.112 [2024-12-14 12:56:22.723766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:23.112 [2024-12-14 12:56:22.723805] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
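The whole startup pipeline above (superblock, bands, L2P, layout, scrub) is the effect of a single RPC, issued with a 60 s timeout presumably because the NV cache scrub announced in the last notice takes seconds on a 5 GiB cache. The layout numbers are self-consistent: 3774873 L2P entries x 4 B per entry is about 14.4 MiB, matching the 14.50 MiB l2p region reserved in the NV cache layout. The command as traced (common.sh@119):

    # -b ftl: resulting bdev name; -d: the thin lvol (base); -c: the cache split;
    # --l2p_dram_limit 2: cap resident L2P DRAM, cf. the later
    # "l2p maximum resident size is: 1 (of 2) MiB" notice.
    "$rootdir/scripts/rpc.py" -t 60 bdev_ftl_create -b ftl \
        -d 0b3b0a9b-5122-466a-899d-519ee173101d -c cachen1p0 --l2p_dram_limit 2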
00:30:23.112 [2024-12-14 12:56:22.723815] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:27.330 [2024-12-14 12:56:26.490494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.490582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:27.330 [2024-12-14 12:56:26.490601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3766.673 ms 00:30:27.330 [2024-12-14 12:56:26.490614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.521682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.521746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:27.330 [2024-12-14 12:56:26.521762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.817 ms 00:30:27.330 [2024-12-14 12:56:26.521774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.521876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.521891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:27.330 [2024-12-14 12:56:26.521900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:27.330 [2024-12-14 12:56:26.521917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.557356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.557611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:27.330 [2024-12-14 12:56:26.557632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.402 ms 00:30:27.330 [2024-12-14 12:56:26.557644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.557680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.557698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:27.330 [2024-12-14 12:56:26.557708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:27.330 [2024-12-14 12:56:26.557718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.558321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.558349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:27.330 [2024-12-14 12:56:26.558367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:30:27.330 [2024-12-14 12:56:26.558377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.558423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.558435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:27.330 [2024-12-14 12:56:26.558446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:27.330 [2024-12-14 12:56:26.558459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.575764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.575815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:27.330 [2024-12-14 12:56:26.575827] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.285 ms 00:30:27.330 [2024-12-14 12:56:26.575837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.597812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:27.330 [2024-12-14 12:56:26.599326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.599380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:27.330 [2024-12-14 12:56:26.599396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.399 ms 00:30:27.330 [2024-12-14 12:56:26.599405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.629140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.629191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:27.330 [2024-12-14 12:56:26.629208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.683 ms 00:30:27.330 [2024-12-14 12:56:26.629217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.629324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.629339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:27.330 [2024-12-14 12:56:26.629355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:27.330 [2024-12-14 12:56:26.629363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.654437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.654482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:27.330 [2024-12-14 12:56:26.654498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.005 ms 00:30:27.330 [2024-12-14 12:56:26.654506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.679600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.679644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:27.330 [2024-12-14 12:56:26.679660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.038 ms 00:30:27.330 [2024-12-14 12:56:26.679667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.680312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.680333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:27.330 [2024-12-14 12:56:26.680346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.598 ms 00:30:27.330 [2024-12-14 12:56:26.680357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.764092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.764138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:27.330 [2024-12-14 12:56:26.764174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 83.690 ms 00:30:27.330 [2024-12-14 12:56:26.764183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.791593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:27.330 [2024-12-14 12:56:26.791780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:27.330 [2024-12-14 12:56:26.791809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.312 ms 00:30:27.330 [2024-12-14 12:56:26.791818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.817802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.817849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:27.330 [2024-12-14 12:56:26.817864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.889 ms 00:30:27.330 [2024-12-14 12:56:26.817872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.843954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.844001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:27.330 [2024-12-14 12:56:26.844017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.025 ms 00:30:27.330 [2024-12-14 12:56:26.844026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.844098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.844109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:27.330 [2024-12-14 12:56:26.844124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:27.330 [2024-12-14 12:56:26.844132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.844232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.330 [2024-12-14 12:56:26.844245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:27.330 [2024-12-14 12:56:26.844256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:30:27.330 [2024-12-14 12:56:26.844264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.330 [2024-12-14 12:56:26.845552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4132.616 ms, result 0 00:30:27.330 { 00:30:27.330 "name": "ftl", 00:30:27.330 "uuid": "49f72a2b-0888-4059-a320-156546a3e0aa" 00:30:27.330 } 00:30:27.330 12:56:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:27.592 [2024-12-14 12:56:27.076548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.592 12:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:27.592 12:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:27.853 [2024-12-14 12:56:27.505024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:27.853 12:56:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:28.114 [2024-12-14 12:56:27.726434] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:28.114 12:56:27 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:28.374 Fill FTL, iteration 1 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84965 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84965 /var/tmp/spdk.tgt.sock 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84965 ']' 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.374 12:56:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:28.635 [2024-12-14 12:56:28.171758] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
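Collected in one place, the target-side export path that ftl/common.sh@121-@126 just traced: one TCP transport, a subsystem (-a -m 1 as in the trace), the FTL bdev as its namespace, a listener on 127.0.0.1:4420, then a config snapshot. Arguments and paths are exactly those from this run; where save_config's output lands is not visible in the trace, though common.sh@84 later checks for tgt.json:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2018-09.io.spdk:cnode0
  $RPC nvmf_create_transport --trtype TCP
  $RPC nvmf_create_subsystem $NQN -a -m 1
  $RPC nvmf_subsystem_add_ns $NQN ftl
  $RPC nvmf_subsystem_add_listener $NQN -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $RPC save_config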
00:30:28.635 [2024-12-14 12:56:28.171894] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84965 ] 00:30:28.635 [2024-12-14 12:56:28.331924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.895 [2024-12-14 12:56:28.437623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.468 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.468 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:29.468 12:56:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:29.728 ftln1 00:30:29.728 12:56:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:29.728 12:56:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84965 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84965 ']' 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84965 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84965 00:30:29.989 killing process with pid 84965 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:29.989 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:29.990 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84965' 00:30:29.990 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84965 00:30:29.990 12:56:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84965 00:30:31.376 12:56:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:31.376 12:56:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:31.376 [2024-12-14 12:56:31.036716] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
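From here the initiator side takes over: a second SPDK app pinned to core 1 attaches to the target over TCP (the namespace surfaces as ftln1), its bdev subsystem config is wrapped into the JSON document spdk_dd consumes, and the first 1 GiB fill runs. A condensed sketch of the common.sh@167-@199 sequence traced above; the redirect into ini.json is an assumption, since only the echoes and the @153 existence check appear in the trace:

  INI_RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  $INI_RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0              # prints ftln1
  {
    echo '{"subsystems": ['
    $INI_RPC save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0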
00:30:31.376 [2024-12-14 12:56:31.036829] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85011 ] 00:30:31.637 [2024-12-14 12:56:31.191834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.637 [2024-12-14 12:56:31.268112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:33.024  [2024-12-14T12:56:33.704Z] Copying: 252/1024 [MB] (252 MBps) [2024-12-14T12:56:34.648Z] Copying: 493/1024 [MB] (241 MBps) [2024-12-14T12:56:35.586Z] Copying: 733/1024 [MB] (240 MBps) [2024-12-14T12:56:35.848Z] Copying: 969/1024 [MB] (236 MBps) [2024-12-14T12:56:36.420Z] Copying: 1024/1024 [MB] (average 241 MBps) 00:30:36.683 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:36.683 Calculate MD5 checksum, iteration 1 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:36.683 12:56:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:36.943 [2024-12-14 12:56:36.454523] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
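The checksum pass mirrors the fill: the same 1024 one-MiB blocks are read back from ftln1 over the TCP path into a scratch file, which is then hashed (upgrade_shutdown.sh@44 and @47-@48). Same environment assumptions as the sketch above:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  $DD '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '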
00:30:36.943 [2024-12-14 12:56:36.454833] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85065 ] 00:30:36.943 [2024-12-14 12:56:36.615159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.204 [2024-12-14 12:56:36.699030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:38.264  [2024-12-14T12:56:38.944Z] Copying: 697/1024 [MB] (697 MBps) [2024-12-14T12:56:39.518Z] Copying: 1024/1024 [MB] (average 610 MBps) 00:30:39.781 00:30:39.781 12:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:39.781 12:56:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:42.326 Fill FTL, iteration 2 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=62d7002dc13d6804821a7a35f540305e 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:42.326 12:56:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:42.326 [2024-12-14 12:56:41.753474] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
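Worth noting in the bookkeeping: bs=1048576 with count=1024 moves exactly the size=1073741824 set at @28, and the script advances seek and skip by count each pass, i.e. offsets are tracked in --bs-sized blocks, so iteration 2 writes and reads 1 GiB into ftln1. A one-line check of the arithmetic:

  echo $(( 1048576 * 1024 ))    # 1073741824 bytes = 1 GiB per iteration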
00:30:42.326 [2024-12-14 12:56:41.753572] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85121 ] 00:30:42.326 [2024-12-14 12:56:41.910333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.326 [2024-12-14 12:56:42.019786] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.710  [2024-12-14T12:56:44.822Z] Copying: 187/1024 [MB] (187 MBps) [2024-12-14T12:56:45.756Z] Copying: 425/1024 [MB] (238 MBps) [2024-12-14T12:56:46.689Z] Copying: 664/1024 [MB] (239 MBps) [2024-12-14T12:56:46.948Z] Copying: 904/1024 [MB] (240 MBps) [2024-12-14T12:56:47.515Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:30:47.778 00:30:47.778 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:47.778 Calculate MD5 checksum, iteration 2 00:30:47.778 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:47.779 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:47.779 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:47.779 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:47.779 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:47.779 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:47.779 12:56:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:48.036 [2024-12-14 12:56:47.580216] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
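Both iterations follow the same shape; condensed from the @38-@48 trace into a plain loop (tcp_dd is the common.sh helper that wraps spdk_dd with the socket and --json arguments shown earlier, and the increments by 1024 mirror the seek=1024/2048 and skip=1024 assignments in the trace):

  sums=()
  seek=0; skip=0
  for (( i = 0; i < 2; i++ )); do
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      (( seek += 1024 ))
      tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
             --bs=1048576 --count=1024 --qd=2 --skip=$skip
      (( skip += 1024 ))
      sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
  done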
00:30:48.036 [2024-12-14 12:56:47.580344] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85185 ] 00:30:48.036 [2024-12-14 12:56:47.738991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.293 [2024-12-14 12:56:47.826435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.668  [2024-12-14T12:56:49.971Z] Copying: 635/1024 [MB] (635 MBps) [2024-12-14T12:56:50.909Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:30:51.172 00:30:51.172 12:56:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:51.172 12:56:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:53.714 12:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:53.714 12:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=db615604e14dcc6b9dd95d89771532a6 00:30:53.714 12:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:53.714 12:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:53.714 12:56:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:53.714 [2024-12-14 12:56:53.058607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.714 [2024-12-14 12:56:53.058648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:53.714 [2024-12-14 12:56:53.058659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:53.714 [2024-12-14 12:56:53.058665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.714 [2024-12-14 12:56:53.058684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.714 [2024-12-14 12:56:53.058694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:53.714 [2024-12-14 12:56:53.058700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:53.714 [2024-12-14 12:56:53.058706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.714 [2024-12-14 12:56:53.058721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.714 [2024-12-14 12:56:53.058728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:53.714 [2024-12-14 12:56:53.058734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:53.714 [2024-12-14 12:56:53.058740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.715 [2024-12-14 12:56:53.058789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.170 ms, result 0 00:30:53.715 true 00:30:53.715 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.715 { 00:30:53.715 "name": "ftl", 00:30:53.715 "properties": [ 00:30:53.715 { 00:30:53.715 "name": "superblock_version", 00:30:53.715 "value": 5, 00:30:53.715 "read-only": true 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "name": "base_device", 00:30:53.715 "bands": [ 00:30:53.715 { 00:30:53.715 "id": 0, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 
00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 1, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 2, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 3, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 4, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 5, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 6, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 7, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 8, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 9, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 10, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 11, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 12, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 13, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 14, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 15, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 16, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 17, 00:30:53.715 "state": "FREE", 00:30:53.715 "validity": 0.0 00:30:53.715 } 00:30:53.715 ], 00:30:53.715 "read-only": true 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "name": "cache_device", 00:30:53.715 "type": "bdev", 00:30:53.715 "chunks": [ 00:30:53.715 { 00:30:53.715 "id": 0, 00:30:53.715 "state": "INACTIVE", 00:30:53.715 "utilization": 0.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 1, 00:30:53.715 "state": "CLOSED", 00:30:53.715 "utilization": 1.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 2, 00:30:53.715 "state": "CLOSED", 00:30:53.715 "utilization": 1.0 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 3, 00:30:53.715 "state": "OPEN", 00:30:53.715 "utilization": 0.001953125 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "id": 4, 00:30:53.715 "state": "OPEN", 00:30:53.715 "utilization": 0.0 00:30:53.715 } 00:30:53.715 ], 00:30:53.715 "read-only": true 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "name": "verbose_mode", 00:30:53.715 "value": true, 00:30:53.715 "unit": "", 00:30:53.715 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:53.715 }, 00:30:53.715 { 00:30:53.715 "name": "prep_upgrade_on_shutdown", 00:30:53.715 "value": false, 00:30:53.715 "unit": "", 00:30:53.715 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:53.715 } 00:30:53.715 ] 00:30:53.715 } 00:30:53.715 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:53.976 [2024-12-14 12:56:53.466882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
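Alongside flipping prep_upgrade_on_shutdown, the test counts cache_device chunks with non-zero utilization and bails if the count is zero (upgrade_shutdown.sh@59-@64, below). Run standalone against the properties dump above, the filter yields 3: two CLOSED chunks plus the partially written OPEN one:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device")
           | .chunks[] | select(.utilization != 0.0)] | length'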
00:30:53.976 [2024-12-14 12:56:53.467001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:53.976 [2024-12-14 12:56:53.467050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:53.976 [2024-12-14 12:56:53.467083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.976 [2024-12-14 12:56:53.467116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.976 [2024-12-14 12:56:53.467132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:53.976 [2024-12-14 12:56:53.467147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:53.976 [2024-12-14 12:56:53.467161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.976 [2024-12-14 12:56:53.467184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:53.976 [2024-12-14 12:56:53.467199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:53.976 [2024-12-14 12:56:53.467214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:53.976 [2024-12-14 12:56:53.467265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:53.976 [2024-12-14 12:56:53.467326] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.431 ms, result 0 00:30:53.976 true 00:30:53.976 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:53.976 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:53.976 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:53.976 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:53.976 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:53.976 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:54.237 [2024-12-14 12:56:53.827188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.237 [2024-12-14 12:56:53.827289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:54.237 [2024-12-14 12:56:53.827329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:54.237 [2024-12-14 12:56:53.827346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.237 [2024-12-14 12:56:53.827375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.237 [2024-12-14 12:56:53.827391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:54.237 [2024-12-14 12:56:53.827406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:54.237 [2024-12-14 12:56:53.827420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:54.237 [2024-12-14 12:56:53.827443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:54.237 [2024-12-14 12:56:53.827458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:54.237 [2024-12-14 12:56:53.827473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:54.237 [2024-12-14 12:56:53.827531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:54.237 [2024-12-14 12:56:53.827587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.385 ms, result 0 00:30:54.237 true 00:30:54.237 12:56:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:54.498 { 00:30:54.498 "name": "ftl", 00:30:54.498 "properties": [ 00:30:54.498 { 00:30:54.498 "name": "superblock_version", 00:30:54.498 "value": 5, 00:30:54.498 "read-only": true 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "name": "base_device", 00:30:54.498 "bands": [ 00:30:54.498 { 00:30:54.498 "id": 0, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 1, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 2, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 3, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 4, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 5, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 6, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 7, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 8, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 9, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 10, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 11, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 12, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 13, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 14, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 15, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 16, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 }, 00:30:54.498 { 00:30:54.498 "id": 17, 00:30:54.498 "state": "FREE", 00:30:54.498 "validity": 0.0 00:30:54.498 } 00:30:54.498 ], 00:30:54.498 "read-only": true 00:30:54.498 }, 00:30:54.498 { 00:30:54.499 "name": "cache_device", 00:30:54.499 "type": "bdev", 00:30:54.499 "chunks": [ 00:30:54.499 { 00:30:54.499 "id": 0, 00:30:54.499 "state": "INACTIVE", 00:30:54.499 "utilization": 0.0 00:30:54.499 }, 00:30:54.499 { 00:30:54.499 "id": 1, 00:30:54.499 "state": "CLOSED", 00:30:54.499 "utilization": 1.0 00:30:54.499 }, 00:30:54.499 { 00:30:54.499 "id": 2, 00:30:54.499 "state": "CLOSED", 00:30:54.499 "utilization": 1.0 00:30:54.499 }, 00:30:54.499 { 00:30:54.499 "id": 3, 00:30:54.499 "state": "OPEN", 00:30:54.499 "utilization": 0.001953125 00:30:54.499 }, 00:30:54.499 { 00:30:54.499 "id": 4, 00:30:54.499 "state": "OPEN", 00:30:54.499 "utilization": 0.0 00:30:54.499 } 00:30:54.499 ], 00:30:54.499 "read-only": true 00:30:54.499 }, 00:30:54.499 { 00:30:54.499 "name": "verbose_mode", 
00:30:54.499 "value": true, 00:30:54.499 "unit": "", 00:30:54.499 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:54.499 }, 00:30:54.499 { 00:30:54.499 "name": "prep_upgrade_on_shutdown", 00:30:54.499 "value": true, 00:30:54.499 "unit": "", 00:30:54.499 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:54.499 } 00:30:54.499 ] 00:30:54.499 } 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84837 ]] 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84837 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84837 ']' 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84837 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84837 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.499 killing process with pid 84837 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84837' 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84837 00:30:54.499 12:56:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84837 00:30:55.070 [2024-12-14 12:56:54.593929] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:55.070 [2024-12-14 12:56:54.604352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.070 [2024-12-14 12:56:54.604387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:55.070 [2024-12-14 12:56:54.604397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:55.070 [2024-12-14 12:56:54.604403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.070 [2024-12-14 12:56:54.604421] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:55.070 [2024-12-14 12:56:54.606605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.070 [2024-12-14 12:56:54.606631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:55.070 [2024-12-14 12:56:54.606639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.173 ms 00:30:55.070 [2024-12-14 12:56:54.606649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.207 [2024-12-14 12:57:01.979071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.207 [2024-12-14 12:57:01.979240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:03.207 [2024-12-14 12:57:01.979258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7372.378 ms 00:31:03.207 [2024-12-14 12:57:01.979271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.207 [2024-12-14 12:57:01.980352] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:03.207 [2024-12-14 12:57:01.980367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:03.207 [2024-12-14 12:57:01.980374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.066 ms 00:31:03.207 [2024-12-14 12:57:01.980381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.207 [2024-12-14 12:57:01.981247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.207 [2024-12-14 12:57:01.981267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:03.207 [2024-12-14 12:57:01.981276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.847 ms 00:31:03.207 [2024-12-14 12:57:01.981282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.207 [2024-12-14 12:57:01.990102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.207 [2024-12-14 12:57:01.990209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:03.207 [2024-12-14 12:57:01.990222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.788 ms 00:31:03.207 [2024-12-14 12:57:01.990228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.207 [2024-12-14 12:57:01.995979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.207 [2024-12-14 12:57:01.996005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:03.207 [2024-12-14 12:57:01.996014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.728 ms 00:31:03.207 [2024-12-14 12:57:01.996022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.207 [2024-12-14 12:57:01.996098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:01.996107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:03.208 [2024-12-14 12:57:01.996119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:31:03.208 [2024-12-14 12:57:01.996125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.004155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.004179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:03.208 [2024-12-14 12:57:02.004187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.017 ms 00:31:03.208 [2024-12-14 12:57:02.004193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.012228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.012252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:03.208 [2024-12-14 12:57:02.012259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.010 ms 00:31:03.208 [2024-12-14 12:57:02.012264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.019852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.019876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:03.208 [2024-12-14 12:57:02.019883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.563 ms 00:31:03.208 [2024-12-14 12:57:02.019888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.027597] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.027621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:03.208 [2024-12-14 12:57:02.027628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.651 ms 00:31:03.208 [2024-12-14 12:57:02.027635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.027659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:03.208 [2024-12-14 12:57:02.027678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:03.208 [2024-12-14 12:57:02.027687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:03.208 [2024-12-14 12:57:02.027693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:03.208 [2024-12-14 12:57:02.027700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:03.208 [2024-12-14 12:57:02.027790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:03.208 [2024-12-14 12:57:02.027797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 49f72a2b-0888-4059-a320-156546a3e0aa 00:31:03.208 [2024-12-14 12:57:02.027803] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:03.208 [2024-12-14 12:57:02.027809] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:03.208 [2024-12-14 12:57:02.027815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:03.208 [2024-12-14 12:57:02.027821] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:03.208 [2024-12-14 12:57:02.027827] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:03.208 [2024-12-14 12:57:02.027835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:03.208 [2024-12-14 12:57:02.027840] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:03.208 [2024-12-14 12:57:02.027845] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:03.208 [2024-12-14 12:57:02.027851] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:03.208 [2024-12-14 12:57:02.027858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.027868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:03.208 [2024-12-14 12:57:02.027875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:31:03.208 [2024-12-14 12:57:02.027881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.038062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.038084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:03.208 [2024-12-14 12:57:02.038093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.162 ms 00:31:03.208 [2024-12-14 12:57:02.038103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.038386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:03.208 [2024-12-14 12:57:02.038394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:03.208 [2024-12-14 12:57:02.038401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:31:03.208 [2024-12-14 12:57:02.038407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.073200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.073225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:03.208 [2024-12-14 12:57:02.073237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.073244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.073270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.073277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:03.208 [2024-12-14 12:57:02.073284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.073290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.073358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.073367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:03.208 [2024-12-14 12:57:02.073374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.073380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.073406] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.073412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:03.208 [2024-12-14 12:57:02.073419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.073426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.135845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.135889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:03.208 [2024-12-14 12:57:02.135899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.135910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.187031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:03.208 [2024-12-14 12:57:02.187081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.187087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.187151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:03.208 [2024-12-14 12:57:02.187166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.187172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.187227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:03.208 [2024-12-14 12:57:02.187246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.187252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.187327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:03.208 [2024-12-14 12:57:02.187342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.187349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.187375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:03.208 [2024-12-14 12:57:02.187393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.187399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 [2024-12-14 12:57:02.187437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:03.208 [2024-12-14 12:57:02.187452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.208 [2024-12-14 12:57:02.187458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.208 
[2024-12-14 12:57:02.187502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:03.208 [2024-12-14 12:57:02.187511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:03.209 [2024-12-14 12:57:02.187518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:03.209 [2024-12-14 12:57:02.187524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:03.209 [2024-12-14 12:57:02.187640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7583.230 ms, result 0 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85361 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85361 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85361 ']' 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.409 12:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.410 12:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.410 12:57:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:07.410 [2024-12-14 12:57:06.932679] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
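Restart side: tcp_target_setup (common.sh@81-@91, traced above) brings spdk_tgt back on core 0 with the config snapshot taken before shutdown and waits for its RPC socket; FTL then reopens the base and cache bdevs and reloads its superblock in the records that follow. Backgrounding with & and $! is an assumption here; the trace only shows the resulting pid 85361:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"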
00:31:07.410 [2024-12-14 12:57:06.932809] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85361 ] 00:31:07.410 [2024-12-14 12:57:07.090187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:07.668 [2024-12-14 12:57:07.189011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.235 [2024-12-14 12:57:07.814821] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:08.235 [2024-12-14 12:57:07.814880] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:08.235 [2024-12-14 12:57:07.970336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.235 [2024-12-14 12:57:07.970388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:08.235 [2024-12-14 12:57:07.970402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:08.235 [2024-12-14 12:57:07.970410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.235 [2024-12-14 12:57:07.970467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.235 [2024-12-14 12:57:07.970478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:08.235 [2024-12-14 12:57:07.970486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:31:08.235 [2024-12-14 12:57:07.970494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.235 [2024-12-14 12:57:07.970519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:08.496 [2024-12-14 12:57:07.971209] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:08.496 [2024-12-14 12:57:07.971235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.971243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:08.496 [2024-12-14 12:57:07.971251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.724 ms 00:31:08.496 [2024-12-14 12:57:07.971259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.972561] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:08.496 [2024-12-14 12:57:07.985657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.985697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:08.496 [2024-12-14 12:57:07.985800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.098 ms 00:31:08.496 [2024-12-14 12:57:07.985808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.985875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.985886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:08.496 [2024-12-14 12:57:07.985894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:08.496 [2024-12-14 12:57:07.985901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.992424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 
12:57:07.992460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:08.496 [2024-12-14 12:57:07.992469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.445 ms 00:31:08.496 [2024-12-14 12:57:07.992477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.992537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.992547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:08.496 [2024-12-14 12:57:07.992555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:31:08.496 [2024-12-14 12:57:07.992563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.992607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.992621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:08.496 [2024-12-14 12:57:07.992629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:08.496 [2024-12-14 12:57:07.992637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.992661] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:08.496 [2024-12-14 12:57:07.996324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.996360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:08.496 [2024-12-14 12:57:07.996370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.668 ms 00:31:08.496 [2024-12-14 12:57:07.996381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.996410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.996418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:08.496 [2024-12-14 12:57:07.996427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:08.496 [2024-12-14 12:57:07.996434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.996471] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:08.496 [2024-12-14 12:57:07.996493] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:08.496 [2024-12-14 12:57:07.996531] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:08.496 [2024-12-14 12:57:07.996546] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:08.496 [2024-12-14 12:57:07.996652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:08.496 [2024-12-14 12:57:07.996662] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:08.496 [2024-12-14 12:57:07.996673] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:08.496 [2024-12-14 12:57:07.996683] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:08.496 [2024-12-14 12:57:07.996691] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:08.496 [2024-12-14 12:57:07.996703] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:08.496 [2024-12-14 12:57:07.996710] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:08.496 [2024-12-14 12:57:07.996718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:08.496 [2024-12-14 12:57:07.996725] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:08.496 [2024-12-14 12:57:07.996733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.996741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:08.496 [2024-12-14 12:57:07.996748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:31:08.496 [2024-12-14 12:57:07.996755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.996840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.496 [2024-12-14 12:57:07.996848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:08.496 [2024-12-14 12:57:07.996859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:31:08.496 [2024-12-14 12:57:07.996866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.496 [2024-12-14 12:57:07.996981] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:08.496 [2024-12-14 12:57:07.996992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:08.496 [2024-12-14 12:57:07.997000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:08.496 [2024-12-14 12:57:07.997008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.496 [2024-12-14 12:57:07.997016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:08.496 [2024-12-14 12:57:07.997023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:08.496 [2024-12-14 12:57:07.997031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:08.496 [2024-12-14 12:57:07.997038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:08.496 [2024-12-14 12:57:07.997044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:08.496 [2024-12-14 12:57:07.997051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.496 [2024-12-14 12:57:07.997074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:08.496 [2024-12-14 12:57:07.997081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:08.496 [2024-12-14 12:57:07.997087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.496 [2024-12-14 12:57:07.997094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:08.496 [2024-12-14 12:57:07.997101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:08.496 [2024-12-14 12:57:07.997108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.496 [2024-12-14 12:57:07.997115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:08.496 [2024-12-14 12:57:07.997121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:08.496 [2024-12-14 12:57:07.997128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.496 [2024-12-14 12:57:07.997135] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:08.496 [2024-12-14 12:57:07.997141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:08.496 [2024-12-14 12:57:07.997148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.496 [2024-12-14 12:57:07.997155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:08.496 [2024-12-14 12:57:07.997168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:08.496 [2024-12-14 12:57:07.997175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.496 [2024-12-14 12:57:07.997181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:08.496 [2024-12-14 12:57:07.997188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:08.496 [2024-12-14 12:57:07.997194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.496 [2024-12-14 12:57:07.997201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:08.496 [2024-12-14 12:57:07.997208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:08.496 [2024-12-14 12:57:07.997215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:08.496 [2024-12-14 12:57:07.997222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:08.496 [2024-12-14 12:57:07.997228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:08.496 [2024-12-14 12:57:07.997234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.497 [2024-12-14 12:57:07.997241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:08.497 [2024-12-14 12:57:07.997247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:08.497 [2024-12-14 12:57:07.997254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.497 [2024-12-14 12:57:07.997260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:08.497 [2024-12-14 12:57:07.997267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:08.497 [2024-12-14 12:57:07.997274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.497 [2024-12-14 12:57:07.997280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:08.497 [2024-12-14 12:57:07.997286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:08.497 [2024-12-14 12:57:07.997294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.497 [2024-12-14 12:57:07.997301] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:08.497 [2024-12-14 12:57:07.997308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:08.497 [2024-12-14 12:57:07.997316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:08.497 [2024-12-14 12:57:07.997323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:08.497 [2024-12-14 12:57:07.997334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:08.497 [2024-12-14 12:57:07.997341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:08.497 [2024-12-14 12:57:07.997347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:08.497 [2024-12-14 12:57:07.997354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:08.497 [2024-12-14 12:57:07.997360] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:08.497 [2024-12-14 12:57:07.997367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:08.497 [2024-12-14 12:57:07.997375] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:08.497 [2024-12-14 12:57:07.997407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:08.497 [2024-12-14 12:57:07.997424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:08.497 [2024-12-14 12:57:07.997446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:08.497 [2024-12-14 12:57:07.997453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:08.497 [2024-12-14 12:57:07.997460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:08.497 [2024-12-14 12:57:07.997467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:08.497 [2024-12-14 12:57:07.997518] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:08.497 [2024-12-14 12:57:07.997526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:08.497 [2024-12-14 12:57:07.997542] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:08.497 [2024-12-14 12:57:07.997549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:08.497 [2024-12-14 12:57:07.997560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:08.497 [2024-12-14 12:57:07.997568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:08.497 [2024-12-14 12:57:07.997576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:08.497 [2024-12-14 12:57:07.997583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.656 ms 00:31:08.497 [2024-12-14 12:57:07.997591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:08.497 [2024-12-14 12:57:07.997633] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:08.497 [2024-12-14 12:57:07.997643] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:12.707 [2024-12-14 12:57:11.918996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.919092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:12.707 [2024-12-14 12:57:11.919111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3921.347 ms 00:31:12.707 [2024-12-14 12:57:11.919121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.950635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.950701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:12.707 [2024-12-14 12:57:11.950715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.261 ms 00:31:12.707 [2024-12-14 12:57:11.950725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.950824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.950842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:12.707 [2024-12-14 12:57:11.950852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:12.707 [2024-12-14 12:57:11.950861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.984317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.984351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:12.707 [2024-12-14 12:57:11.984361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.417 ms 00:31:12.707 [2024-12-14 12:57:11.984371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.984398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.984406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:12.707 [2024-12-14 12:57:11.984414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:12.707 [2024-12-14 12:57:11.984421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.984801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.984827] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:12.707 [2024-12-14 12:57:11.984836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.317 ms 00:31:12.707 [2024-12-14 12:57:11.984843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.984887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.984895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:12.707 [2024-12-14 12:57:11.984903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:31:12.707 [2024-12-14 12:57:11.984910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:11.999212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:11.999244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:12.707 [2024-12-14 12:57:11.999253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.280 ms 00:31:12.707 [2024-12-14 12:57:11.999260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.025587] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:12.707 [2024-12-14 12:57:12.025628] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:12.707 [2024-12-14 12:57:12.025641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.025650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:12.707 [2024-12-14 12:57:12.025659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.285 ms 00:31:12.707 [2024-12-14 12:57:12.025666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.039571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.039605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:12.707 [2024-12-14 12:57:12.039615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.860 ms 00:31:12.707 [2024-12-14 12:57:12.039623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.051013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.051046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:12.707 [2024-12-14 12:57:12.051064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.348 ms 00:31:12.707 [2024-12-14 12:57:12.051071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.062554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.062584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:12.707 [2024-12-14 12:57:12.062594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.447 ms 00:31:12.707 [2024-12-14 12:57:12.062601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.063223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.063248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:12.707 [2024-12-14 
12:57:12.063257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.531 ms 00:31:12.707 [2024-12-14 12:57:12.063264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.119157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.119205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:12.707 [2024-12-14 12:57:12.119218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.874 ms 00:31:12.707 [2024-12-14 12:57:12.119226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.129668] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:12.707 [2024-12-14 12:57:12.130451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.130484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:12.707 [2024-12-14 12:57:12.130495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.177 ms 00:31:12.707 [2024-12-14 12:57:12.130502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.130588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.130601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:12.707 [2024-12-14 12:57:12.130610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:12.707 [2024-12-14 12:57:12.130618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.130671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.130682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:12.707 [2024-12-14 12:57:12.130690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:12.707 [2024-12-14 12:57:12.130697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.130717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.707 [2024-12-14 12:57:12.130725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:12.707 [2024-12-14 12:57:12.130736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:12.707 [2024-12-14 12:57:12.130744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.707 [2024-12-14 12:57:12.130778] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:12.707 [2024-12-14 12:57:12.130787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.708 [2024-12-14 12:57:12.130795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:12.708 [2024-12-14 12:57:12.130803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:12.708 [2024-12-14 12:57:12.130811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:12.708 [2024-12-14 12:57:12.154654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:12.708 [2024-12-14 12:57:12.154699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:12.708 [2024-12-14 12:57:12.154710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.824 ms 00:31:12.708 [2024-12-14 12:57:12.154718] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.708 [2024-12-14 12:57:12.154791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:12.708 [2024-12-14 12:57:12.154800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:31:12.708 [2024-12-14 12:57:12.154809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms
00:31:12.708 [2024-12-14 12:57:12.154816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:12.708 [2024-12-14 12:57:12.155865] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4185.093 ms, result 0
00:31:12.708 [2024-12-14 12:57:12.171022] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:12.708 [2024-12-14 12:57:12.187025] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:31:12.708 [2024-12-14 12:57:12.195195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:13.279 12:57:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:13.279 12:57:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:31:13.279 12:57:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:31:13.279 12:57:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:31:13.279 12:57:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:31:13.541 [2024-12-14 12:57:13.116023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:13.541 [2024-12-14 12:57:13.116097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:31:13.541 [2024-12-14 12:57:13.116118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms
00:31:13.541 [2024-12-14 12:57:13.116127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:13.541 [2024-12-14 12:57:13.116170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:13.541 [2024-12-14 12:57:13.116181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:31:13.541 [2024-12-14 12:57:13.116191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms
00:31:13.541 [2024-12-14 12:57:13.116199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:13.541 [2024-12-14 12:57:13.116220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:13.541 [2024-12-14 12:57:13.116229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:31:13.541 [2024-12-14 12:57:13.116240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:31:13.541 [2024-12-14 12:57:13.116248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:13.541 [2024-12-14 12:57:13.116314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.285 ms, result 0
00:31:13.541 true
00:31:13.541 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:13.801 {
00:31:13.801   "name": "ftl",
00:31:13.801   "properties": [
00:31:13.801     {
00:31:13.801       "name": "superblock_version",
00:31:13.801       "value": 5,
00:31:13.801       "read-only": true
00:31:13.801     },
00:31:13.801     {
00:31:13.801       "name": "base_device",
00:31:13.801       "bands": [
00:31:13.801         {
00:31:13.801           "id": 0,
00:31:13.801           "state": "CLOSED",
00:31:13.801           "validity": 1.0
00:31:13.801         },
00:31:13.801         {
00:31:13.801           "id": 1,
00:31:13.801           "state": "CLOSED",
00:31:13.801           "validity": 1.0
00:31:13.801         },
00:31:13.801         {
00:31:13.801           "id": 2,
00:31:13.801           "state": "CLOSED",
00:31:13.801           "validity": 0.007843137254901933
00:31:13.801         },
00:31:13.801         {
00:31:13.802           "id": 3,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 4,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 5,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 6,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 7,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 8,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 9,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 10,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 11,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 12,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 13,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 14,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 15,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 16,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 17,
00:31:13.802           "state": "FREE",
00:31:13.802           "validity": 0.0
00:31:13.802         }
00:31:13.802       ],
00:31:13.802       "read-only": true
00:31:13.802     },
00:31:13.802     {
00:31:13.802       "name": "cache_device",
00:31:13.802       "type": "bdev",
00:31:13.802       "chunks": [
00:31:13.802         {
00:31:13.802           "id": 0,
00:31:13.802           "state": "INACTIVE",
00:31:13.802           "utilization": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 1,
00:31:13.802           "state": "OPEN",
00:31:13.802           "utilization": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 2,
00:31:13.802           "state": "OPEN",
00:31:13.802           "utilization": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 3,
00:31:13.802           "state": "FREE",
00:31:13.802           "utilization": 0.0
00:31:13.802         },
00:31:13.802         {
00:31:13.802           "id": 4,
00:31:13.802           "state": "FREE",
00:31:13.802           "utilization": 0.0
00:31:13.802         }
00:31:13.802       ],
00:31:13.802       "read-only": true
00:31:13.802     },
00:31:13.802     {
00:31:13.802       "name": "verbose_mode",
00:31:13.802       "value": true,
00:31:13.802       "unit": "",
00:31:13.802       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:13.802     },
00:31:13.802     {
00:31:13.802       "name": "prep_upgrade_on_shutdown",
00:31:13.802       "value": false,
00:31:13.802       "unit": "",
00:31:13.802       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:13.802     }
00:31:13.802   ]
00:31:13.802 }
00:31:13.802 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:31:13.802 12:57:13
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:13.802 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:14.063 Validate MD5 checksum, iteration 1 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:14.063 12:57:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:14.324 [2024-12-14 12:57:13.862231] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
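The two jq probes above collapse the properties dump into single counters: the first counts cache_device chunks with non-zero utilization, the second counts bands reported as OPENED. Both come back 0 against the JSON shown earlier, which is what lets the test fall through to checksum validation. Standalone, the first probe amounts to this (a sketch; it assumes the target is still listening on the default RPC socket):

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    echo "utilized chunks: $used"   # 0 against the dump above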
00:31:14.324 [2024-12-14 12:57:13.862370] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85454 ] 00:31:14.324 [2024-12-14 12:57:14.027924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.584 [2024-12-14 12:57:14.174760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.498  [2024-12-14T12:57:16.803Z] Copying: 526/1024 [MB] (526 MBps) [2024-12-14T12:57:19.344Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:31:19.607 00:31:19.607 12:57:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:19.607 12:57:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=62d7002dc13d6804821a7a35f540305e 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 62d7002dc13d6804821a7a35f540305e != \6\2\d\7\0\0\2\d\c\1\3\d\6\8\0\4\8\2\1\a\7\a\3\5\f\5\4\0\3\0\5\e ]] 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:21.521 Validate MD5 checksum, iteration 2 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:21.521 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:21.522 12:57:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:21.522 [2024-12-14 12:57:20.964413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
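With iteration 1's digest (62d7002dc13d6804821a7a35f540305e) matching, the loop advances: each pass reads a 1 GiB window from ftln1 over NVMe/TCP (1024 blocks of 1 MiB at queue depth 2) into a scratch file, moves --skip forward by 1024 blocks, and checks the file's MD5 against the digest recorded for that window. Reconstructed from the trace, the loop's shape is roughly this (a sketch; tmp_file and expected_sums[] are stand-ins for however the script actually holds the file path and reference digests):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$tmp_file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        ((skip += 1024))
        sum=$(md5sum "$tmp_file" | cut -f1 -d' ')
        [[ $sum == "${expected_sums[i]}" ]] || exit 1   # a digest mismatch fails the test
    done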
00:31:21.522 [2024-12-14 12:57:20.964525] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85532 ] 00:31:21.522 [2024-12-14 12:57:21.124359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.522 [2024-12-14 12:57:21.218603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.488  [2024-12-14T12:57:23.796Z] Copying: 584/1024 [MB] (584 MBps) [2024-12-14T12:57:24.731Z] Copying: 1024/1024 [MB] (average 578 MBps) 00:31:24.994 00:31:24.994 12:57:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:24.994 12:57:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=db615604e14dcc6b9dd95d89771532a6 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ db615604e14dcc6b9dd95d89771532a6 != \d\b\6\1\5\6\0\4\e\1\4\d\c\c\6\b\9\d\d\9\5\d\8\9\7\7\1\5\3\2\a\6 ]] 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85361 ]] 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85361 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85591 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85591 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85591 ']' 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
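Here the test reaches its point: tcp_target_shutdown_dirty kills pid 85361 with SIGKILL, so FTL never gets to run the clean 'FTL shutdown' sequence seen earlier, and the tcp_target_setup that follows (pid 85591) must bring the same configuration back through the recovery path; the 'Initialize recovery' and 'Recover band state' steps in the startup trace below are the visible result. Reduced to its shape, the helper pair from ftl/common.sh as it appears in the xtrace:

    tcp_target_shutdown_dirty() {
        # SIGKILL: the target dies without writing a clean FTL shutdown marker
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }
    # restarting against the same tgt.json forces FTL through dirty-state recovery
    tcp_target_setup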
00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.893 12:57:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:26.893 [2024-12-14 12:57:26.509507] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:26.893 [2024-12-14 12:57:26.509626] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85591 ] 00:31:26.893 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85361 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:27.151 [2024-12-14 12:57:26.662684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:27.151 [2024-12-14 12:57:26.756754] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.721 [2024-12-14 12:57:27.394688] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:27.721 [2024-12-14 12:57:27.394748] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:27.983 [2024-12-14 12:57:27.543419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.543460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:27.983 [2024-12-14 12:57:27.543471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:27.983 [2024-12-14 12:57:27.543478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.543523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.543531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:27.983 [2024-12-14 12:57:27.543537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:27.983 [2024-12-14 12:57:27.543542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.543560] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:27.983 [2024-12-14 12:57:27.544094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:27.983 [2024-12-14 12:57:27.544116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.544123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:27.983 [2024-12-14 12:57:27.544130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.562 ms 00:31:27.983 [2024-12-14 12:57:27.544136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.544593] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:27.983 [2024-12-14 12:57:27.557163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.557197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:27.983 [2024-12-14 12:57:27.557208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.572 ms 00:31:27.983 [2024-12-14 12:57:27.557214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.564097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:27.983 [2024-12-14 12:57:27.564127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:27.983 [2024-12-14 12:57:27.564136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:27.983 [2024-12-14 12:57:27.564142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.564385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.564401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:27.983 [2024-12-14 12:57:27.564408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:31:27.983 [2024-12-14 12:57:27.564414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.564455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.564462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:27.983 [2024-12-14 12:57:27.564469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:27.983 [2024-12-14 12:57:27.564474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.564492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.564499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:27.983 [2024-12-14 12:57:27.564505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:27.983 [2024-12-14 12:57:27.564511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.564526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:27.983 [2024-12-14 12:57:27.566811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.566837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:27.983 [2024-12-14 12:57:27.566845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.289 ms 00:31:27.983 [2024-12-14 12:57:27.566851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.566873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.566880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:27.983 [2024-12-14 12:57:27.566887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:27.983 [2024-12-14 12:57:27.566892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.566908] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:27.983 [2024-12-14 12:57:27.566923] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:27.983 [2024-12-14 12:57:27.566948] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:27.983 [2024-12-14 12:57:27.566961] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:27.983 [2024-12-14 12:57:27.567041] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:27.983 [2024-12-14 12:57:27.567049] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:27.983 [2024-12-14 12:57:27.567068] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:27.983 [2024-12-14 12:57:27.567077] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:27.983 [2024-12-14 12:57:27.567083] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:27.983 [2024-12-14 12:57:27.567093] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:27.983 [2024-12-14 12:57:27.567099] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:27.983 [2024-12-14 12:57:27.567104] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:27.983 [2024-12-14 12:57:27.567110] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:27.983 [2024-12-14 12:57:27.567116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.567127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:27.983 [2024-12-14 12:57:27.567133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.209 ms 00:31:27.983 [2024-12-14 12:57:27.567138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.567214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.983 [2024-12-14 12:57:27.567227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:27.983 [2024-12-14 12:57:27.567234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:31:27.983 [2024-12-14 12:57:27.567242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.983 [2024-12-14 12:57:27.567329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:27.983 [2024-12-14 12:57:27.567337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:27.983 [2024-12-14 12:57:27.567348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:27.983 [2024-12-14 12:57:27.567354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.983 [2024-12-14 12:57:27.567364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:27.983 [2024-12-14 12:57:27.567369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:27.983 [2024-12-14 12:57:27.567374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:27.983 [2024-12-14 12:57:27.567382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:27.983 [2024-12-14 12:57:27.567387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:27.983 [2024-12-14 12:57:27.567392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.983 [2024-12-14 12:57:27.567401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:27.983 [2024-12-14 12:57:27.567407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:27.983 [2024-12-14 12:57:27.567411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.983 [2024-12-14 12:57:27.567417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:27.983 [2024-12-14 12:57:27.567424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:27.983 [2024-12-14 12:57:27.567429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.983 [2024-12-14 12:57:27.567434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:27.983 [2024-12-14 12:57:27.567439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:27.984 [2024-12-14 12:57:27.567443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567448] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:27.984 [2024-12-14 12:57:27.567454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:27.984 [2024-12-14 12:57:27.567463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:27.984 [2024-12-14 12:57:27.567472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:27.984 [2024-12-14 12:57:27.567477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:27.984 [2024-12-14 12:57:27.567488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:27.984 [2024-12-14 12:57:27.567493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:27.984 [2024-12-14 12:57:27.567503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:27.984 [2024-12-14 12:57:27.567508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:27.984 [2024-12-14 12:57:27.567518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:27.984 [2024-12-14 12:57:27.567523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:27.984 [2024-12-14 12:57:27.567533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:27.984 [2024-12-14 12:57:27.567548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567552] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:27.984 [2024-12-14 12:57:27.567562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:27.984 [2024-12-14 12:57:27.567568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567573] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:27.984 [2024-12-14 12:57:27.567579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:27.984 [2024-12-14 12:57:27.567584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:27.984 [2024-12-14 12:57:27.567595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:27.984 [2024-12-14 12:57:27.567600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:27.984 [2024-12-14 12:57:27.567605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:27.984 [2024-12-14 12:57:27.567610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:27.984 [2024-12-14 12:57:27.567615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:27.984 [2024-12-14 12:57:27.567620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:27.984 [2024-12-14 12:57:27.567626] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:27.984 [2024-12-14 12:57:27.567633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:27.984 [2024-12-14 12:57:27.567645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:27.984 [2024-12-14 12:57:27.567660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:27.984 [2024-12-14 12:57:27.567665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:27.984 [2024-12-14 12:57:27.567671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:27.984 [2024-12-14 12:57:27.567676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:27.984 [2024-12-14 12:57:27.567714] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:27.984 [2024-12-14 12:57:27.567720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:27.984 [2024-12-14 12:57:27.567733] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:27.984 [2024-12-14 12:57:27.567738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:27.984 [2024-12-14 12:57:27.567747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:27.984 [2024-12-14 12:57:27.567752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.567758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:27.984 [2024-12-14 12:57:27.567764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.481 ms 00:31:27.984 [2024-12-14 12:57:27.567769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.587026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.587052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:27.984 [2024-12-14 12:57:27.587067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.220 ms 00:31:27.984 [2024-12-14 12:57:27.587073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.587100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.587106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:27.984 [2024-12-14 12:57:27.587113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:27.984 [2024-12-14 12:57:27.587118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.610983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.611010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:27.984 [2024-12-14 12:57:27.611017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.827 ms 00:31:27.984 [2024-12-14 12:57:27.611023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.611043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.611049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:27.984 [2024-12-14 12:57:27.611063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:27.984 [2024-12-14 12:57:27.611071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.611140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.611148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:27.984 [2024-12-14 12:57:27.611154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:31:27.984 [2024-12-14 12:57:27.611160] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.611190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.611196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:27.984 [2024-12-14 12:57:27.611202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:27.984 [2024-12-14 12:57:27.611208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.622706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.622734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:27.984 [2024-12-14 12:57:27.622742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.483 ms 00:31:27.984 [2024-12-14 12:57:27.622748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.622823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.622831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:27.984 [2024-12-14 12:57:27.622838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:27.984 [2024-12-14 12:57:27.622843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.652524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.652557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:27.984 [2024-12-14 12:57:27.652567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.667 ms 00:31:27.984 [2024-12-14 12:57:27.652574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.659565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.659593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:27.984 [2024-12-14 12:57:27.659607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.386 ms 00:31:27.984 [2024-12-14 12:57:27.659613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.984 [2024-12-14 12:57:27.702949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.984 [2024-12-14 12:57:27.702992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:27.985 [2024-12-14 12:57:27.703003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.294 ms 00:31:27.985 [2024-12-14 12:57:27.703009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.985 [2024-12-14 12:57:27.703120] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:27.985 [2024-12-14 12:57:27.703195] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:27.985 [2024-12-14 12:57:27.703268] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:27.985 [2024-12-14 12:57:27.703341] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:27.985 [2024-12-14 12:57:27.703348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.985 [2024-12-14 12:57:27.703355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:27.985 [2024-12-14 
12:57:27.703362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:31:27.985 [2024-12-14 12:57:27.703368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.985 [2024-12-14 12:57:27.703411] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:27.985 [2024-12-14 12:57:27.703421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.985 [2024-12-14 12:57:27.703430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:27.985 [2024-12-14 12:57:27.703436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:27.985 [2024-12-14 12:57:27.703442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:27.985 [2024-12-14 12:57:27.714492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:27.985 [2024-12-14 12:57:27.714526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:27.985 [2024-12-14 12:57:27.714534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.034 ms 00:31:27.985 [2024-12-14 12:57:27.714541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.245 [2024-12-14 12:57:27.720938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.245 [2024-12-14 12:57:27.720967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:28.245 [2024-12-14 12:57:27.720974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:28.245 [2024-12-14 12:57:27.720980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.245 [2024-12-14 12:57:27.721043] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:28.245 [2024-12-14 12:57:27.721167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.245 [2024-12-14 12:57:27.721183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:28.245 [2024-12-14 12:57:27.721190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms 00:31:28.245 [2024-12-14 12:57:27.721196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.816 [2024-12-14 12:57:28.255115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.817 [2024-12-14 12:57:28.255178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:28.817 [2024-12-14 12:57:28.255192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 533.245 ms 00:31:28.817 [2024-12-14 12:57:28.255202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.817 [2024-12-14 12:57:28.259607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.817 [2024-12-14 12:57:28.259645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:28.817 [2024-12-14 12:57:28.259656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.389 ms 00:31:28.817 [2024-12-14 12:57:28.259664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.817 [2024-12-14 12:57:28.260602] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:28.817 [2024-12-14 12:57:28.260635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.817 [2024-12-14 12:57:28.260643] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:28.817 [2024-12-14 12:57:28.260653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.938 ms 00:31:28.817 [2024-12-14 12:57:28.260661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.817 [2024-12-14 12:57:28.260693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.817 [2024-12-14 12:57:28.260702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:28.817 [2024-12-14 12:57:28.260711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.817 [2024-12-14 12:57:28.260723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.817 [2024-12-14 12:57:28.260757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 539.708 ms, result 0 00:31:28.817 [2024-12-14 12:57:28.260795] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:28.817 [2024-12-14 12:57:28.260870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.817 [2024-12-14 12:57:28.260880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:28.817 [2024-12-14 12:57:28.260888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:31:28.817 [2024-12-14 12:57:28.260896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:28.999837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:28.999915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:29.388 [2024-12-14 12:57:28.999948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 737.914 ms 00:31:29.388 [2024-12-14 12:57:28.999958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.004702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.004746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:29.388 [2024-12-14 12:57:29.004757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.597 ms 00:31:29.388 [2024-12-14 12:57:29.004765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.005924] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:29.388 [2024-12-14 12:57:29.005968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.005977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:29.388 [2024-12-14 12:57:29.005986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.169 ms 00:31:29.388 [2024-12-14 12:57:29.005994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.006032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.006041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:29.388 [2024-12-14 12:57:29.006050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:29.388 [2024-12-14 12:57:29.006075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 
12:57:29.006116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 745.310 ms, result 0 00:31:29.388 [2024-12-14 12:57:29.006165] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:29.388 [2024-12-14 12:57:29.006177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:29.388 [2024-12-14 12:57:29.006188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.006197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:29.388 [2024-12-14 12:57:29.006206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1285.158 ms 00:31:29.388 [2024-12-14 12:57:29.006214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.006245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.006259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:29.388 [2024-12-14 12:57:29.006268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:29.388 [2024-12-14 12:57:29.006276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.018813] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:29.388 [2024-12-14 12:57:29.018941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.018953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:29.388 [2024-12-14 12:57:29.018964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.645 ms 00:31:29.388 [2024-12-14 12:57:29.018973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.019713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.019736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:29.388 [2024-12-14 12:57:29.019749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.627 ms 00:31:29.388 [2024-12-14 12:57:29.019758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.022030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.022051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:29.388 [2024-12-14 12:57:29.022071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.254 ms 00:31:29.388 [2024-12-14 12:57:29.022079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.022125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.022135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:29.388 [2024-12-14 12:57:29.022143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:29.388 [2024-12-14 12:57:29.022156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.022269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.022279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:29.388 
[2024-12-14 12:57:29.022288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:29.388 [2024-12-14 12:57:29.022295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.022317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.022325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:29.388 [2024-12-14 12:57:29.022334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:29.388 [2024-12-14 12:57:29.022342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.022382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:29.388 [2024-12-14 12:57:29.022399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.022408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:29.388 [2024-12-14 12:57:29.022417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:31:29.388 [2024-12-14 12:57:29.022425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.022476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:29.388 [2024-12-14 12:57:29.022485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:29.388 [2024-12-14 12:57:29.022493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:29.388 [2024-12-14 12:57:29.022501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:29.388 [2024-12-14 12:57:29.023749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1479.812 ms, result 0 00:31:29.388 [2024-12-14 12:57:29.039417] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:29.388 [2024-12-14 12:57:29.055430] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:29.388 [2024-12-14 12:57:29.064382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:29.388 Validate MD5 checksum, iteration 1 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:29.388 12:57:29 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:29.388 12:57:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:29.649 [2024-12-14 12:57:29.164974] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:29.649 [2024-12-14 12:57:29.165135] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85626 ] 00:31:29.649 [2024-12-14 12:57:29.331212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.909 [2024-12-14 12:57:29.435024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:31.293  [2024-12-14T12:57:31.601Z] Copying: 706/1024 [MB] (706 MBps) [2024-12-14T12:57:32.984Z] Copying: 1024/1024 [MB] (average 699 MBps) 00:31:33.247 00:31:33.247 12:57:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:33.247 12:57:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:35.160 Validate MD5 checksum, iteration 2 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=62d7002dc13d6804821a7a35f540305e 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 62d7002dc13d6804821a7a35f540305e != \6\2\d\7\0\0\2\d\c\1\3\d\6\8\0\4\8\2\1\a\7\a\3\5\f\5\4\0\3\0\5\e ]] 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:35.160 12:57:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:35.160 [2024-12-14 12:57:34.788942] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 24.03.0 initialization... 00:31:35.160 [2024-12-14 12:57:34.789073] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85687 ] 00:31:35.421 [2024-12-14 12:57:34.944086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.421 [2024-12-14 12:57:35.019443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.809  [2024-12-14T12:57:37.492Z] Copying: 597/1024 [MB] (597 MBps) [2024-12-14T12:57:40.796Z] Copying: 1024/1024 [MB] (average 546 MBps) 00:31:41.059 00:31:41.059 12:57:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:41.059 12:57:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=db615604e14dcc6b9dd95d89771532a6 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ db615604e14dcc6b9dd95d89771532a6 != \d\b\6\1\5\6\0\4\e\1\4\d\c\c\6\b\9\d\d\9\5\d\8\9\7\7\1\5\3\2\a\6 ]] 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:42.443 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85591 ]] 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85591 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85591 ']' 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85591 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85591 00:31:42.704 killing process with pid 85591 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85591' 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85591 00:31:42.704 12:57:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85591 00:31:43.277 [2024-12-14 12:57:42.743579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:43.277 [2024-12-14 12:57:42.755337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.755374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:43.277 [2024-12-14 12:57:42.755385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:43.277 [2024-12-14 12:57:42.755391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.755408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:43.277 [2024-12-14 12:57:42.757498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.757523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:43.277 [2024-12-14 12:57:42.757535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.080 ms 00:31:43.277 [2024-12-14 12:57:42.757541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.757718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.757732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:43.277 [2024-12-14 12:57:42.757739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.161 ms 00:31:43.277 [2024-12-14 12:57:42.757745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.758848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.758871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:43.277 [2024-12-14 12:57:42.758878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.092 ms 00:31:43.277 [2024-12-14 12:57:42.758887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.759746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.759765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:43.277 [2024-12-14 12:57:42.759773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:31:43.277 [2024-12-14 12:57:42.759779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.767022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.767052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:43.277 [2024-12-14 12:57:42.767066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.218 ms 00:31:43.277 [2024-12-14 12:57:42.767076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.771084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.771111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:43.277 [2024-12-14 12:57:42.771119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.981 ms 00:31:43.277 [2024-12-14 12:57:42.771126] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.771190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.771199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:43.277 [2024-12-14 12:57:42.771205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:31:43.277 [2024-12-14 12:57:42.771214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.778395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.778422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:43.277 [2024-12-14 12:57:42.778429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.169 ms 00:31:43.277 [2024-12-14 12:57:42.778434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.785289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.785316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:43.277 [2024-12-14 12:57:42.785323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.831 ms 00:31:43.277 [2024-12-14 12:57:42.785328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.792395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.792419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:43.277 [2024-12-14 12:57:42.792426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.043 ms 00:31:43.277 [2024-12-14 12:57:42.792432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.277 [2024-12-14 12:57:42.799357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.277 [2024-12-14 12:57:42.799381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:43.278 [2024-12-14 12:57:42.799388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.882 ms 00:31:43.278 [2024-12-14 12:57:42.799393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.799416] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:43.278 [2024-12-14 12:57:42.799427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:43.278 [2024-12-14 12:57:42.799434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:43.278 [2024-12-14 12:57:42.799441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:43.278 [2024-12-14 12:57:42.799447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 
[2024-12-14 12:57:42.799475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:43.278 [2024-12-14 12:57:42.799534] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:43.278 [2024-12-14 12:57:42.799539] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 49f72a2b-0888-4059-a320-156546a3e0aa 00:31:43.278 [2024-12-14 12:57:42.799545] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:43.278 [2024-12-14 12:57:42.799551] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:43.278 [2024-12-14 12:57:42.799556] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:43.278 [2024-12-14 12:57:42.799562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:43.278 [2024-12-14 12:57:42.799567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:43.278 [2024-12-14 12:57:42.799573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:43.278 [2024-12-14 12:57:42.799581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:43.278 [2024-12-14 12:57:42.799586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:43.278 [2024-12-14 12:57:42.799591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:43.278 [2024-12-14 12:57:42.799596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.278 [2024-12-14 12:57:42.799601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:43.278 [2024-12-14 12:57:42.799609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:31:43.278 [2024-12-14 12:57:42.799615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.809198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.278 [2024-12-14 12:57:42.809224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:43.278 [2024-12-14 12:57:42.809231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.570 ms 00:31:43.278 [2024-12-14 12:57:42.809237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:31:43.278 [2024-12-14 12:57:42.809519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:43.278 [2024-12-14 12:57:42.809534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:43.278 [2024-12-14 12:57:42.809541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.264 ms 00:31:43.278 [2024-12-14 12:57:42.809546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.842506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.842534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:43.278 [2024-12-14 12:57:42.842542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.842552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.842577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.842583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:43.278 [2024-12-14 12:57:42.842588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.842594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.842655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.842663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:43.278 [2024-12-14 12:57:42.842669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.842674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.842689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.842695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:43.278 [2024-12-14 12:57:42.842701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.842707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.901798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.901829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:43.278 [2024-12-14 12:57:42.901837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.901843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:43.278 [2024-12-14 12:57:42.950631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.950638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:43.278 [2024-12-14 12:57:42.950700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.950706] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:43.278 [2024-12-14 12:57:42.950770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.950775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:43.278 [2024-12-14 12:57:42.950856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.950862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:43.278 [2024-12-14 12:57:42.950900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.950906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:43.278 [2024-12-14 12:57:42.950944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.950950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.950982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:43.278 [2024-12-14 12:57:42.950991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:43.278 [2024-12-14 12:57:42.950996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:43.278 [2024-12-14 12:57:42.951002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:43.278 [2024-12-14 12:57:42.951107] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 195.732 ms, result 0 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:48.571 Remove shared memory files 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:31:48.571 12:57:48 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85361 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:31:48.571 00:31:48.571 real 1m28.864s 00:31:48.571 user 1m59.860s 00:31:48.571 sys 0m20.013s 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:48.571 ************************************ 00:31:48.571 12:57:48 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:48.571 END TEST ftl_upgrade_shutdown 00:31:48.571 ************************************ 00:31:48.571 12:57:48 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:31:48.571 12:57:48 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:48.571 12:57:48 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:31:48.571 12:57:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:48.571 12:57:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:31:48.571 ************************************ 00:31:48.571 START TEST ftl_restore_fast 00:31:48.571 ************************************ 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:31:48.571 * Looking for test storage... 00:31:48.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.571 --rc genhtml_branch_coverage=1 00:31:48.571 --rc genhtml_function_coverage=1 00:31:48.571 --rc genhtml_legend=1 00:31:48.571 --rc geninfo_all_blocks=1 00:31:48.571 --rc geninfo_unexecuted_blocks=1 00:31:48.571 00:31:48.571 ' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.571 --rc genhtml_branch_coverage=1 00:31:48.571 --rc genhtml_function_coverage=1 00:31:48.571 --rc genhtml_legend=1 00:31:48.571 --rc geninfo_all_blocks=1 00:31:48.571 --rc geninfo_unexecuted_blocks=1 00:31:48.571 00:31:48.571 ' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.571 --rc genhtml_branch_coverage=1 00:31:48.571 --rc genhtml_function_coverage=1 00:31:48.571 --rc genhtml_legend=1 00:31:48.571 --rc geninfo_all_blocks=1 00:31:48.571 --rc geninfo_unexecuted_blocks=1 00:31:48.571 00:31:48.571 ' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:48.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:48.571 --rc genhtml_branch_coverage=1 00:31:48.571 --rc genhtml_function_coverage=1 00:31:48.571 --rc genhtml_legend=1 00:31:48.571 --rc geninfo_all_blocks=1 00:31:48.571 --rc geninfo_unexecuted_blocks=1 00:31:48.571 00:31:48.571 ' 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:48.571 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.FpXaWQYIpj 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:31:48.572 12:57:48 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=85867 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 85867 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 85867 ']' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:48.572 12:57:48 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:48.833 [2024-12-14 12:57:48.394947] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:48.833 [2024-12-14 12:57:48.395126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85867 ] 00:31:48.833 [2024-12-14 12:57:48.559376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.094 [2024-12-14 12:57:48.645960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:49.666 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:49.927 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:50.188 { 00:31:50.188 "name": "nvme0n1", 00:31:50.188 "aliases": [ 00:31:50.188 "464c2889-bed4-43d7-b4e3-1127089fb2fc" 00:31:50.188 ], 00:31:50.188 "product_name": "NVMe disk", 00:31:50.188 "block_size": 4096, 00:31:50.188 "num_blocks": 1310720, 00:31:50.188 "uuid": "464c2889-bed4-43d7-b4e3-1127089fb2fc", 00:31:50.188 "numa_id": -1, 00:31:50.188 "assigned_rate_limits": { 00:31:50.188 "rw_ios_per_sec": 0, 00:31:50.188 "rw_mbytes_per_sec": 0, 00:31:50.188 "r_mbytes_per_sec": 0, 00:31:50.188 "w_mbytes_per_sec": 0 00:31:50.188 }, 00:31:50.188 "claimed": true, 00:31:50.188 "claim_type": "read_many_write_one", 00:31:50.188 "zoned": false, 00:31:50.188 "supported_io_types": { 00:31:50.188 "read": true, 00:31:50.188 "write": true, 00:31:50.188 "unmap": true, 00:31:50.188 "flush": true, 00:31:50.188 "reset": true, 00:31:50.188 "nvme_admin": true, 00:31:50.188 "nvme_io": true, 00:31:50.188 "nvme_io_md": false, 00:31:50.188 "write_zeroes": true, 00:31:50.188 "zcopy": false, 00:31:50.188 "get_zone_info": false, 00:31:50.188 "zone_management": false, 00:31:50.188 "zone_append": false, 00:31:50.188 "compare": true, 00:31:50.188 "compare_and_write": false, 00:31:50.188 "abort": true, 00:31:50.188 "seek_hole": false, 00:31:50.188 "seek_data": false, 00:31:50.188 "copy": true, 00:31:50.188 "nvme_iov_md": false 00:31:50.188 }, 00:31:50.188 "driver_specific": { 00:31:50.188 "nvme": [ 00:31:50.188 { 00:31:50.188 "pci_address": "0000:00:11.0", 00:31:50.188 "trid": { 00:31:50.188 "trtype": "PCIe", 00:31:50.188 "traddr": "0000:00:11.0" 00:31:50.188 }, 00:31:50.188 "ctrlr_data": { 00:31:50.188 "cntlid": 0, 00:31:50.188 "vendor_id": "0x1b36", 00:31:50.188 "model_number": "QEMU NVMe Ctrl", 00:31:50.188 "serial_number": "12341", 00:31:50.188 "firmware_revision": "8.0.0", 00:31:50.188 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:50.188 "oacs": { 00:31:50.188 "security": 0, 00:31:50.188 "format": 1, 00:31:50.188 "firmware": 0, 00:31:50.188 "ns_manage": 1 00:31:50.188 }, 00:31:50.188 "multi_ctrlr": false, 00:31:50.188 "ana_reporting": false 00:31:50.188 }, 00:31:50.188 "vs": { 00:31:50.188 "nvme_version": "1.4" 00:31:50.188 }, 00:31:50.188 "ns_data": { 00:31:50.188 "id": 1, 00:31:50.188 "can_share": false 00:31:50.188 } 00:31:50.188 } 00:31:50.188 ], 00:31:50.188 "mp_policy": "active_passive" 00:31:50.188 } 00:31:50.188 } 00:31:50.188 ]' 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:50.188 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:50.449 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=3e7250b4-b3ef-4129-ae85-3da469eade75 00:31:50.449 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:50.449 12:57:49 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3e7250b4-b3ef-4129-ae85-3da469eade75 00:31:50.449 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:50.710 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=6e399793-4cb3-4b41-986d-7482a291e4bd 00:31:50.710 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6e399793-4cb3-4b41-986d-7482a291e4bd 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:50.971 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:51.232 { 00:31:51.232 "name": "6c4e63f0-de46-4ad8-a485-cfa78357d4fe", 00:31:51.232 "aliases": [ 00:31:51.232 "lvs/nvme0n1p0" 00:31:51.232 ], 00:31:51.232 "product_name": "Logical Volume", 00:31:51.232 "block_size": 4096, 00:31:51.232 "num_blocks": 26476544, 00:31:51.232 "uuid": "6c4e63f0-de46-4ad8-a485-cfa78357d4fe", 00:31:51.232 "assigned_rate_limits": { 00:31:51.232 "rw_ios_per_sec": 0, 00:31:51.232 "rw_mbytes_per_sec": 0, 00:31:51.232 "r_mbytes_per_sec": 0, 00:31:51.232 "w_mbytes_per_sec": 0 00:31:51.232 }, 00:31:51.232 "claimed": false, 00:31:51.232 "zoned": false, 00:31:51.232 "supported_io_types": { 00:31:51.232 "read": true, 00:31:51.232 "write": true, 00:31:51.232 "unmap": true, 00:31:51.232 "flush": false, 00:31:51.232 "reset": true, 00:31:51.232 "nvme_admin": false, 00:31:51.232 "nvme_io": false, 00:31:51.232 "nvme_io_md": false, 00:31:51.232 "write_zeroes": true, 00:31:51.232 "zcopy": false, 00:31:51.232 "get_zone_info": false, 00:31:51.232 "zone_management": false, 00:31:51.232 
"zone_append": false, 00:31:51.232 "compare": false, 00:31:51.232 "compare_and_write": false, 00:31:51.232 "abort": false, 00:31:51.232 "seek_hole": true, 00:31:51.232 "seek_data": true, 00:31:51.232 "copy": false, 00:31:51.232 "nvme_iov_md": false 00:31:51.232 }, 00:31:51.232 "driver_specific": { 00:31:51.232 "lvol": { 00:31:51.232 "lvol_store_uuid": "6e399793-4cb3-4b41-986d-7482a291e4bd", 00:31:51.232 "base_bdev": "nvme0n1", 00:31:51.232 "thin_provision": true, 00:31:51.232 "num_allocated_clusters": 0, 00:31:51.232 "snapshot": false, 00:31:51.232 "clone": false, 00:31:51.232 "esnap_clone": false 00:31:51.232 } 00:31:51.232 } 00:31:51.232 } 00:31:51.232 ]' 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:51.232 12:57:50 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:51.496 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:51.785 { 00:31:51.785 "name": "6c4e63f0-de46-4ad8-a485-cfa78357d4fe", 00:31:51.785 "aliases": [ 00:31:51.785 "lvs/nvme0n1p0" 00:31:51.785 ], 00:31:51.785 "product_name": "Logical Volume", 00:31:51.785 "block_size": 4096, 00:31:51.785 "num_blocks": 26476544, 00:31:51.785 "uuid": "6c4e63f0-de46-4ad8-a485-cfa78357d4fe", 00:31:51.785 "assigned_rate_limits": { 00:31:51.785 "rw_ios_per_sec": 0, 00:31:51.785 "rw_mbytes_per_sec": 0, 00:31:51.785 "r_mbytes_per_sec": 0, 00:31:51.785 "w_mbytes_per_sec": 0 00:31:51.785 }, 00:31:51.785 "claimed": false, 00:31:51.785 "zoned": false, 00:31:51.785 "supported_io_types": { 00:31:51.785 "read": true, 00:31:51.785 "write": true, 00:31:51.785 "unmap": true, 00:31:51.785 "flush": false, 00:31:51.785 "reset": true, 00:31:51.785 "nvme_admin": false, 00:31:51.785 "nvme_io": false, 00:31:51.785 "nvme_io_md": false, 00:31:51.785 "write_zeroes": true, 00:31:51.785 "zcopy": false, 00:31:51.785 "get_zone_info": false, 00:31:51.785 
"zone_management": false, 00:31:51.785 "zone_append": false, 00:31:51.785 "compare": false, 00:31:51.785 "compare_and_write": false, 00:31:51.785 "abort": false, 00:31:51.785 "seek_hole": true, 00:31:51.785 "seek_data": true, 00:31:51.785 "copy": false, 00:31:51.785 "nvme_iov_md": false 00:31:51.785 }, 00:31:51.785 "driver_specific": { 00:31:51.785 "lvol": { 00:31:51.785 "lvol_store_uuid": "6e399793-4cb3-4b41-986d-7482a291e4bd", 00:31:51.785 "base_bdev": "nvme0n1", 00:31:51.785 "thin_provision": true, 00:31:51.785 "num_allocated_clusters": 0, 00:31:51.785 "snapshot": false, 00:31:51.785 "clone": false, 00:31:51.785 "esnap_clone": false 00:31:51.785 } 00:31:51.785 } 00:31:51.785 } 00:31:51.785 ]' 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:51.785 12:57:51 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:52.049 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6c4e63f0-de46-4ad8-a485-cfa78357d4fe 00:31:52.310 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:52.310 { 00:31:52.310 "name": "6c4e63f0-de46-4ad8-a485-cfa78357d4fe", 00:31:52.310 "aliases": [ 00:31:52.310 "lvs/nvme0n1p0" 00:31:52.310 ], 00:31:52.310 "product_name": "Logical Volume", 00:31:52.310 "block_size": 4096, 00:31:52.310 "num_blocks": 26476544, 00:31:52.310 "uuid": "6c4e63f0-de46-4ad8-a485-cfa78357d4fe", 00:31:52.310 "assigned_rate_limits": { 00:31:52.310 "rw_ios_per_sec": 0, 00:31:52.310 "rw_mbytes_per_sec": 0, 00:31:52.310 "r_mbytes_per_sec": 0, 00:31:52.310 "w_mbytes_per_sec": 0 00:31:52.310 }, 00:31:52.310 "claimed": false, 00:31:52.310 "zoned": false, 00:31:52.310 "supported_io_types": { 00:31:52.310 "read": true, 00:31:52.310 "write": true, 00:31:52.310 "unmap": true, 00:31:52.310 "flush": false, 00:31:52.310 "reset": true, 00:31:52.310 "nvme_admin": false, 00:31:52.310 "nvme_io": false, 00:31:52.310 "nvme_io_md": false, 00:31:52.310 "write_zeroes": true, 00:31:52.310 "zcopy": false, 00:31:52.310 "get_zone_info": false, 00:31:52.310 "zone_management": false, 00:31:52.310 "zone_append": false, 00:31:52.310 "compare": false, 00:31:52.310 "compare_and_write": false, 00:31:52.310 "abort": false, 
00:31:52.310 "seek_hole": true, 00:31:52.310 "seek_data": true, 00:31:52.310 "copy": false, 00:31:52.310 "nvme_iov_md": false 00:31:52.310 }, 00:31:52.310 "driver_specific": { 00:31:52.310 "lvol": { 00:31:52.310 "lvol_store_uuid": "6e399793-4cb3-4b41-986d-7482a291e4bd", 00:31:52.310 "base_bdev": "nvme0n1", 00:31:52.310 "thin_provision": true, 00:31:52.310 "num_allocated_clusters": 0, 00:31:52.310 "snapshot": false, 00:31:52.310 "clone": false, 00:31:52.311 "esnap_clone": false 00:31:52.311 } 00:31:52.311 } 00:31:52.311 } 00:31:52.311 ]' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 6c4e63f0-de46-4ad8-a485-cfa78357d4fe --l2p_dram_limit 10' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:52.311 12:57:51 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6c4e63f0-de46-4ad8-a485-cfa78357d4fe --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:52.573 [2024-12-14 12:57:52.057891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.057932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:52.573 [2024-12-14 12:57:52.057944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:52.573 [2024-12-14 12:57:52.057951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.057999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.058006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:52.573 [2024-12-14 12:57:52.058014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:52.573 [2024-12-14 12:57:52.058020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.058039] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:52.573 [2024-12-14 12:57:52.058805] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:52.573 [2024-12-14 12:57:52.058830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.058837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:52.573 [2024-12-14 12:57:52.058847] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:31:52.573 [2024-12-14 12:57:52.058853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.058877] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a4449f4f-a8d5-417d-b7f9-195ffc128a42 00:31:52.573 [2024-12-14 12:57:52.059863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.059894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:52.573 [2024-12-14 12:57:52.059902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:52.573 [2024-12-14 12:57:52.059910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.064645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.064678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:52.573 [2024-12-14 12:57:52.064685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:31:52.573 [2024-12-14 12:57:52.064692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.064758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.064767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:52.573 [2024-12-14 12:57:52.064774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:52.573 [2024-12-14 12:57:52.064784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.064819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.064828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:52.573 [2024-12-14 12:57:52.064834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:52.573 [2024-12-14 12:57:52.064843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.064858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:52.573 [2024-12-14 12:57:52.067723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.067750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:52.573 [2024-12-14 12:57:52.067759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:31:52.573 [2024-12-14 12:57:52.067765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.067792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.573 [2024-12-14 12:57:52.067799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:52.573 [2024-12-14 12:57:52.067806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:52.573 [2024-12-14 12:57:52.067812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.573 [2024-12-14 12:57:52.067826] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:52.573 [2024-12-14 12:57:52.067934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:52.573 [2024-12-14 12:57:52.067946] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:52.573 [2024-12-14 12:57:52.067954] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:52.574 [2024-12-14 12:57:52.067964] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:52.574 [2024-12-14 12:57:52.067972] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:52.574 [2024-12-14 12:57:52.067979] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:52.574 [2024-12-14 12:57:52.067984] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:52.574 [2024-12-14 12:57:52.067994] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:52.574 [2024-12-14 12:57:52.067999] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:52.574 [2024-12-14 12:57:52.068007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.574 [2024-12-14 12:57:52.068017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:52.574 [2024-12-14 12:57:52.068025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:31:52.574 [2024-12-14 12:57:52.068031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.574 [2024-12-14 12:57:52.068107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:52.574 [2024-12-14 12:57:52.068114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:52.574 [2024-12-14 12:57:52.068121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:52.574 [2024-12-14 12:57:52.068127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:52.574 [2024-12-14 12:57:52.068202] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:52.574 [2024-12-14 12:57:52.068209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:52.574 [2024-12-14 12:57:52.068217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:52.574 [2024-12-14 12:57:52.068235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:52.574 [2024-12-14 12:57:52.068254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:52.574 [2024-12-14 12:57:52.068265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:52.574 [2024-12-14 12:57:52.068270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:52.574 [2024-12-14 12:57:52.068277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:52.574 [2024-12-14 12:57:52.068282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:52.574 [2024-12-14 12:57:52.068289] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:52.574 [2024-12-14 12:57:52.068294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:52.574 [2024-12-14 12:57:52.068309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:52.574 [2024-12-14 12:57:52.068327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:52.574 [2024-12-14 12:57:52.068344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:52.574 [2024-12-14 12:57:52.068361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:52.574 [2024-12-14 12:57:52.068377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:52.574 [2024-12-14 12:57:52.068396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:52.574 [2024-12-14 12:57:52.068407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:52.574 [2024-12-14 12:57:52.068412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:52.574 [2024-12-14 12:57:52.068418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:52.574 [2024-12-14 12:57:52.068423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:52.574 [2024-12-14 12:57:52.068430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:52.574 [2024-12-14 12:57:52.068435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:52.574 [2024-12-14 12:57:52.068447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:52.574 [2024-12-14 12:57:52.068452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068457] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:52.574 [2024-12-14 12:57:52.068464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:52.574 [2024-12-14 12:57:52.068469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:52.574 [2024-12-14 12:57:52.068477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:52.574 [2024-12-14 12:57:52.068483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:52.574 [2024-12-14 12:57:52.068492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:52.574 [2024-12-14 12:57:52.068497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:52.574 [2024-12-14 12:57:52.068504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:52.574 [2024-12-14 12:57:52.068509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:52.574 [2024-12-14 12:57:52.068516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:52.574 [2024-12-14 12:57:52.068522] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:52.574 [2024-12-14 12:57:52.068530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:52.574 [2024-12-14 12:57:52.068538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:52.574 [2024-12-14 12:57:52.068545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:52.574 [2024-12-14 12:57:52.068550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:52.574 [2024-12-14 12:57:52.068557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:52.574 [2024-12-14 12:57:52.068562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:52.574 [2024-12-14 12:57:52.068569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:52.574 [2024-12-14 12:57:52.068574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:52.574 [2024-12-14 12:57:52.068580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:52.574 [2024-12-14 12:57:52.068586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:52.574 [2024-12-14 12:57:52.068595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:52.574 [2024-12-14 12:57:52.068601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:52.574 [2024-12-14 12:57:52.068607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:52.574 [2024-12-14 12:57:52.068612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:52.574 [2024-12-14 12:57:52.068620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
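The region table just above (SB metadata layout - nvc) is keyed by region type and sized in 4 KiB blocks, so the MiB figures in the earlier NV cache layout dump can be cross-checked by hand. Region type:0x2 is the L2P table: blk_offs:0x20 is 32 blocks = 0.12 MiB and blk_sz:0x5000 is 20480 blocks = 80 MiB, matching the "Region l2p ... offset: 0.12 MiB ... blocks: 80.00 MiB" lines, and also equal to the reported 20971520 L2P entries times the 4-byte L2P address size. A quick sanity check of that arithmetic (a sketch; the variable names are ours, not the test's):

  # Cross-check the FTL layout dump; sizes are in 4 KiB blocks ("block_size": 4096 per the bdev JSON above).
  l2p_blocks=0x5000        # blk_sz of region type:0x2 in the dump
  block_size=4096
  echo $(( l2p_blocks * block_size / 1024 / 1024 ))   # 80  -> "blocks: 80.00 MiB"
  echo $(( 20971520 * 4 / 1024 / 1024 ))              # 80  -> L2P entries x 4-byte addresses

Note that only a fraction of that 80 MiB table is allowed to stay resident in DRAM: the bdev is created with --l2p_dram_limit 10, and the "l2p maximum resident size is: 9 (of 10) MiB" notice further down confirms the cap. The base-device half of the layout dump continues below.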
00:31:52.574 [2024-12-14 12:57:52.068625] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:31:52.574 [2024-12-14 12:57:52.068632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:31:52.574 [2024-12-14 12:57:52.068639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:31:52.574 [2024-12-14 12:57:52.068645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:31:52.574 [2024-12-14 12:57:52.068651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:31:52.574 [2024-12-14 12:57:52.068658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:31:52.574 [2024-12-14 12:57:52.068664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:52.574 [2024-12-14 12:57:52.068670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:31:52.574 [2024-12-14 12:57:52.068676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms
00:31:52.574 [2024-12-14 12:57:52.068682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:52.574 [2024-12-14 12:57:52.068710] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:31:52.574 [2024-12-14 12:57:52.068721] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:31:55.879 [2024-12-14 12:57:55.371958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:55.879 [2024-12-14 12:57:55.372074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:31:55.879 [2024-12-14 12:57:55.372096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3303.230 ms
00:31:55.879 [2024-12-14 12:57:55.372111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.879 [2024-12-14 12:57:55.410524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:55.879 [2024-12-14 12:57:55.410595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:31:55.879 [2024-12-14 12:57:55.410612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.144 ms
00:31:55.879 [2024-12-14 12:57:55.410624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.879 [2024-12-14 12:57:55.410813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:55.879 [2024-12-14 12:57:55.410831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:31:55.879 [2024-12-14 12:57:55.410842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms
00:31:55.879 [2024-12-14 12:57:55.410861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:55.879 [2024-12-14 12:57:55.451422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:55.879 [2024-12-14 12:57:55.451483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:31:55.879 [2024-12-14 12:57:55.451496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.520 ms
00:31:55.879 [2024-12-14 12:57:55.451509] mngt/ftl_mngt.c:
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.451551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.451566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:55.879 [2024-12-14 12:57:55.451577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:55.879 [2024-12-14 12:57:55.451598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.452361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.452406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:55.879 [2024-12-14 12:57:55.452420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:31:55.879 [2024-12-14 12:57:55.452432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.452557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.452573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:55.879 [2024-12-14 12:57:55.452587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:31:55.879 [2024-12-14 12:57:55.452604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.473261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.473314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:55.879 [2024-12-14 12:57:55.473328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.636 ms 00:31:55.879 [2024-12-14 12:57:55.473340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.502985] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:55.879 [2024-12-14 12:57:55.508132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.508179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:55.879 [2024-12-14 12:57:55.508196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.647 ms 00:31:55.879 [2024-12-14 12:57:55.508205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.601780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.601839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:55.879 [2024-12-14 12:57:55.601857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.522 ms 00:31:55.879 [2024-12-14 12:57:55.601867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:55.879 [2024-12-14 12:57:55.602122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:55.879 [2024-12-14 12:57:55.602142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:55.879 [2024-12-14 12:57:55.602158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:31:55.879 [2024-12-14 12:57:55.602167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.628707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.628761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:56.140 [2024-12-14 12:57:55.628779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.477 ms 00:31:56.140 [2024-12-14 12:57:55.628790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.654083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.654131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:56.140 [2024-12-14 12:57:55.654148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.230 ms 00:31:56.140 [2024-12-14 12:57:55.654157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.654796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.654844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:56.140 [2024-12-14 12:57:55.654858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:31:56.140 [2024-12-14 12:57:55.654870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.745836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.745889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:56.140 [2024-12-14 12:57:55.745910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.917 ms 00:31:56.140 [2024-12-14 12:57:55.745919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.775080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.775131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:56.140 [2024-12-14 12:57:55.775148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.073 ms 00:31:56.140 [2024-12-14 12:57:55.775158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.801306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.801354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:56.140 [2024-12-14 12:57:55.801369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.088 ms 00:31:56.140 [2024-12-14 12:57:55.801390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.828121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.828169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:56.140 [2024-12-14 12:57:55.828198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.671 ms 00:31:56.140 [2024-12-14 12:57:55.828207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.140 [2024-12-14 12:57:55.828269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.140 [2024-12-14 12:57:55.828280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:56.141 [2024-12-14 12:57:55.828298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:56.141 [2024-12-14 12:57:55.828307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.141 [2024-12-14 12:57:55.828417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.141 [2024-12-14 
12:57:55.828434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:56.141 [2024-12-14 12:57:55.828446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:31:56.141 [2024-12-14 12:57:55.828455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.141 [2024-12-14 12:57:55.829907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3771.396 ms, result 0 00:31:56.141 { 00:31:56.141 "name": "ftl0", 00:31:56.141 "uuid": "a4449f4f-a8d5-417d-b7f9-195ffc128a42" 00:31:56.141 } 00:31:56.141 12:57:55 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:56.141 12:57:55 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:56.402 12:57:56 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:56.402 12:57:56 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:56.664 [2024-12-14 12:57:56.269047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.269143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:56.664 [2024-12-14 12:57:56.269158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:56.664 [2024-12-14 12:57:56.269170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.269201] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:56.664 [2024-12-14 12:57:56.272564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.272612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:56.664 [2024-12-14 12:57:56.272626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.338 ms 00:31:56.664 [2024-12-14 12:57:56.272635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.272941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.272964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:56.664 [2024-12-14 12:57:56.272977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:31:56.664 [2024-12-14 12:57:56.272985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.276264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.276292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:56.664 [2024-12-14 12:57:56.276305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:31:56.664 [2024-12-14 12:57:56.276313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.282507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.282551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:56.664 [2024-12-14 12:57:56.282570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:31:56.664 [2024-12-14 12:57:56.282579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.310328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.310380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:56.664 [2024-12-14 12:57:56.310397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.654 ms 00:31:56.664 [2024-12-14 12:57:56.310405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.329067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.329118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:56.664 [2024-12-14 12:57:56.329135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:31:56.664 [2024-12-14 12:57:56.329144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.329329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.329345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:56.664 [2024-12-14 12:57:56.329358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:31:56.664 [2024-12-14 12:57:56.329367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.355683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.355733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:56.664 [2024-12-14 12:57:56.355748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.272 ms 00:31:56.664 [2024-12-14 12:57:56.355757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.664 [2024-12-14 12:57:56.381209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.664 [2024-12-14 12:57:56.381256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:56.664 [2024-12-14 12:57:56.381271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.393 ms 00:31:56.664 [2024-12-14 12:57:56.381280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.929 [2024-12-14 12:57:56.406391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.929 [2024-12-14 12:57:56.406439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:56.929 [2024-12-14 12:57:56.406454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.049 ms 00:31:56.929 [2024-12-14 12:57:56.406461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.929 [2024-12-14 12:57:56.431462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.929 [2024-12-14 12:57:56.431510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:56.929 [2024-12-14 12:57:56.431527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.879 ms 00:31:56.929 [2024-12-14 12:57:56.431535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.929 [2024-12-14 12:57:56.431589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:56.929 [2024-12-14 12:57:56.431607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431633] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431889] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.431988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 
[2024-12-14 12:57:56.432154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:56.929 [2024-12-14 12:57:56.432261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:56.930 [2024-12-14 12:57:56.432410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:56.930 [2024-12-14 12:57:56.432643] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:56.930 [2024-12-14 12:57:56.432653] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4449f4f-a8d5-417d-b7f9-195ffc128a42 
00:31:56.930 [2024-12-14 12:57:56.432661] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:56.930 [2024-12-14 12:57:56.432674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:56.930 [2024-12-14 12:57:56.432686] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:56.930 [2024-12-14 12:57:56.432697] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:56.930 [2024-12-14 12:57:56.432705] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:56.930 [2024-12-14 12:57:56.432716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:56.930 [2024-12-14 12:57:56.432723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:56.930 [2024-12-14 12:57:56.432732] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:56.930 [2024-12-14 12:57:56.432738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:56.930 [2024-12-14 12:57:56.432748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.930 [2024-12-14 12:57:56.432757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:56.930 [2024-12-14 12:57:56.432769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:31:56.930 [2024-12-14 12:57:56.432780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.447618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.930 [2024-12-14 12:57:56.447661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:56.930 [2024-12-14 12:57:56.447676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.791 ms 00:31:56.930 [2024-12-14 12:57:56.447685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.448151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:56.930 [2024-12-14 12:57:56.448235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:56.930 [2024-12-14 12:57:56.448253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:31:56.930 [2024-12-14 12:57:56.448261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.498603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.498653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:56.930 [2024-12-14 12:57:56.498669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.498679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.498761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.498770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:56.930 [2024-12-14 12:57:56.498786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.498795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.498898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.498911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:56.930 [2024-12-14 12:57:56.498923] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.498932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.498958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.498966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:56.930 [2024-12-14 12:57:56.498978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.498991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.587707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.587765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:56.930 [2024-12-14 12:57:56.587781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.587789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.645341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:56.930 [2024-12-14 12:57:56.645355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.645366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.645480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:56.930 [2024-12-14 12:57:56.645490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.645496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.645576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:56.930 [2024-12-14 12:57:56.645586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.645594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.645699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:56.930 [2024-12-14 12:57:56.645709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.645717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.645759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:56.930 [2024-12-14 12:57:56.645767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.645774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.930 [2024-12-14 12:57:56.645830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:31:56.930 [2024-12-14 12:57:56.645839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.930 [2024-12-14 12:57:56.645846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.930 [2024-12-14 12:57:56.645899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:56.931 [2024-12-14 12:57:56.645911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:56.931 [2024-12-14 12:57:56.645919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:56.931 [2024-12-14 12:57:56.645926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:56.931 [2024-12-14 12:57:56.646110] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.987 ms, result 0 00:31:56.931 true 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 85867 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85867 ']' 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85867 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85867 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:57.189 killing process with pid 85867 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85867' 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 85867 00:31:57.189 12:57:56 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 85867 00:32:03.769 12:58:02 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:32:07.070 262144+0 records in 00:32:07.070 262144+0 records out 00:32:07.070 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.94305 s, 272 MB/s 00:32:07.070 12:58:06 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:32:08.987 12:58:08 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:32:08.987 [2024-12-14 12:58:08.322258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
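
For context, the restore test resumed above follows a simple integrity pattern: regenerate a 1 GiB payload with dd, record its md5sum, then write it through the ftl0 bdev with spdk_dd, whose startup log continues below. A minimal sketch of that flow, using the exact paths and flags shown in the log; the read-back-and-compare step happens later in restore.sh and is only assumed here, and /tmp/testfile.md5 is an illustrative name:

  # 1 GiB of random data (matches the dd invocation logged above)
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K

  # checksum before the write, for comparison after the restore
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile > /tmp/testfile.md5

  # push the file into the ftl0 bdev (same flags as in the log)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

  # after the device is restored: read ftl0 back to a file and compare its
  # md5 with the one recorded above (that step is outside this excerpt)
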
00:32:08.987 [2024-12-14 12:58:08.322351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86106 ] 00:32:08.987 [2024-12-14 12:58:08.476409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:08.987 [2024-12-14 12:58:08.605834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.248 [2024-12-14 12:58:08.944434] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:09.248 [2024-12-14 12:58:08.944529] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:09.510 [2024-12-14 12:58:09.106091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.510 [2024-12-14 12:58:09.106164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:09.510 [2024-12-14 12:58:09.106182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:09.510 [2024-12-14 12:58:09.106191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.510 [2024-12-14 12:58:09.106255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.510 [2024-12-14 12:58:09.106270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:09.510 [2024-12-14 12:58:09.106280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:32:09.510 [2024-12-14 12:58:09.106289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.510 [2024-12-14 12:58:09.106314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:09.510 [2024-12-14 12:58:09.107104] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:09.510 [2024-12-14 12:58:09.107134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.510 [2024-12-14 12:58:09.107143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:09.510 [2024-12-14 12:58:09.107153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:32:09.510 [2024-12-14 12:58:09.107162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.510 [2024-12-14 12:58:09.109429] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:32:09.510 [2024-12-14 12:58:09.124952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.510 [2024-12-14 12:58:09.125009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:09.510 [2024-12-14 12:58:09.125025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.526 ms 00:32:09.510 [2024-12-14 12:58:09.125034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.510 [2024-12-14 12:58:09.125145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.510 [2024-12-14 12:58:09.125159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:09.510 [2024-12-14 12:58:09.125169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:32:09.510 [2024-12-14 12:58:09.125178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.510 [2024-12-14 12:58:09.136921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:09.510 [2024-12-14 12:58:09.136969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:09.510 [2024-12-14 12:58:09.136981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.657 ms 00:32:09.510 [2024-12-14 12:58:09.136996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.137109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.511 [2024-12-14 12:58:09.137122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:09.511 [2024-12-14 12:58:09.137132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:32:09.511 [2024-12-14 12:58:09.137140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.137204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.511 [2024-12-14 12:58:09.137215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:09.511 [2024-12-14 12:58:09.137224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:09.511 [2024-12-14 12:58:09.137233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.137263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:09.511 [2024-12-14 12:58:09.141848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.511 [2024-12-14 12:58:09.141895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:09.511 [2024-12-14 12:58:09.141910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.592 ms 00:32:09.511 [2024-12-14 12:58:09.141918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.141960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.511 [2024-12-14 12:58:09.141969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:09.511 [2024-12-14 12:58:09.141979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:09.511 [2024-12-14 12:58:09.141987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.142027] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:09.511 [2024-12-14 12:58:09.142071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:09.511 [2024-12-14 12:58:09.142116] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:09.511 [2024-12-14 12:58:09.142138] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:09.511 [2024-12-14 12:58:09.142253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:09.511 [2024-12-14 12:58:09.142265] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:09.511 [2024-12-14 12:58:09.142276] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:09.511 [2024-12-14 12:58:09.142288] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142297] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142307] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:09.511 [2024-12-14 12:58:09.142316] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:09.511 [2024-12-14 12:58:09.142324] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:09.511 [2024-12-14 12:58:09.142337] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:09.511 [2024-12-14 12:58:09.142346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.511 [2024-12-14 12:58:09.142355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:09.511 [2024-12-14 12:58:09.142364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:32:09.511 [2024-12-14 12:58:09.142372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.142455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.511 [2024-12-14 12:58:09.142474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:09.511 [2024-12-14 12:58:09.142483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:09.511 [2024-12-14 12:58:09.142491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.511 [2024-12-14 12:58:09.142597] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:09.511 [2024-12-14 12:58:09.142617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:09.511 [2024-12-14 12:58:09.142626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:09.511 [2024-12-14 12:58:09.142652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:09.511 [2024-12-14 12:58:09.142675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:09.511 [2024-12-14 12:58:09.142689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:09.511 [2024-12-14 12:58:09.142697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:09.511 [2024-12-14 12:58:09.142708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:09.511 [2024-12-14 12:58:09.142724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:09.511 [2024-12-14 12:58:09.142731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:09.511 [2024-12-14 12:58:09.142738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:09.511 [2024-12-14 12:58:09.142752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142758] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:09.511 [2024-12-14 12:58:09.142772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:09.511 [2024-12-14 12:58:09.142792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:09.511 [2024-12-14 12:58:09.142813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:09.511 [2024-12-14 12:58:09.142836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:09.511 [2024-12-14 12:58:09.142858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.511 [2024-12-14 12:58:09.142874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:09.511 [2024-12-14 12:58:09.142881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:09.511 [2024-12-14 12:58:09.142887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:09.511 [2024-12-14 12:58:09.142894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:09.511 [2024-12-14 12:58:09.142902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:09.511 [2024-12-14 12:58:09.142909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:09.511 [2024-12-14 12:58:09.142925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:09.511 [2024-12-14 12:58:09.142932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142941] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:09.511 [2024-12-14 12:58:09.142949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:09.511 [2024-12-14 12:58:09.142957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:09.511 [2024-12-14 12:58:09.142964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:09.511 [2024-12-14 12:58:09.142973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:09.511 [2024-12-14 12:58:09.142980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:09.511 [2024-12-14 12:58:09.142987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:09.511 
[2024-12-14 12:58:09.142994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:09.511 [2024-12-14 12:58:09.143000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:09.511 [2024-12-14 12:58:09.143007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:09.511 [2024-12-14 12:58:09.143016] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:09.511 [2024-12-14 12:58:09.143027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.511 [2024-12-14 12:58:09.143039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:09.511 [2024-12-14 12:58:09.143047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:09.511 [2024-12-14 12:58:09.143055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:09.511 [2024-12-14 12:58:09.143079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:09.511 [2024-12-14 12:58:09.143086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:09.511 [2024-12-14 12:58:09.143096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:09.511 [2024-12-14 12:58:09.143104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:09.511 [2024-12-14 12:58:09.143113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:09.511 [2024-12-14 12:58:09.143120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:09.511 [2024-12-14 12:58:09.143129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:09.511 [2024-12-14 12:58:09.143137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:09.511 [2024-12-14 12:58:09.143145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:09.512 [2024-12-14 12:58:09.143153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:09.512 [2024-12-14 12:58:09.143161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:09.512 [2024-12-14 12:58:09.143170] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:09.512 [2024-12-14 12:58:09.143179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:09.512 [2024-12-14 12:58:09.143188] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:09.512 [2024-12-14 12:58:09.143195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:09.512 [2024-12-14 12:58:09.143203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:09.512 [2024-12-14 12:58:09.143210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:09.512 [2024-12-14 12:58:09.143217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.143227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:09.512 [2024-12-14 12:58:09.143236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:32:09.512 [2024-12-14 12:58:09.143244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.512 [2024-12-14 12:58:09.181798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.181858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:09.512 [2024-12-14 12:58:09.181871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.505 ms 00:32:09.512 [2024-12-14 12:58:09.181885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.512 [2024-12-14 12:58:09.181974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.181983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:09.512 [2024-12-14 12:58:09.181993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:32:09.512 [2024-12-14 12:58:09.182001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.512 [2024-12-14 12:58:09.232383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.232445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:09.512 [2024-12-14 12:58:09.232460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.291 ms 00:32:09.512 [2024-12-14 12:58:09.232470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.512 [2024-12-14 12:58:09.232526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.232538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:09.512 [2024-12-14 12:58:09.232552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:09.512 [2024-12-14 12:58:09.232561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.512 [2024-12-14 12:58:09.233351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.233414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:09.512 [2024-12-14 12:58:09.233427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.703 ms 00:32:09.512 [2024-12-14 12:58:09.233436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.512 [2024-12-14 12:58:09.233620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.512 [2024-12-14 12:58:09.233631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:09.512 [2024-12-14 12:58:09.233645] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:32:09.512 [2024-12-14 12:58:09.233654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.248637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.248671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:09.774 [2024-12-14 12:58:09.248681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.962 ms 00:32:09.774 [2024-12-14 12:58:09.248688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.261494] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:32:09.774 [2024-12-14 12:58:09.261533] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:09.774 [2024-12-14 12:58:09.261545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.261553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:09.774 [2024-12-14 12:58:09.261562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.765 ms 00:32:09.774 [2024-12-14 12:58:09.261570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.286020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.286072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:09.774 [2024-12-14 12:58:09.286083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.411 ms 00:32:09.774 [2024-12-14 12:58:09.286091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.297489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.297521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:09.774 [2024-12-14 12:58:09.297531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.359 ms 00:32:09.774 [2024-12-14 12:58:09.297538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.309496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.309528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:09.774 [2024-12-14 12:58:09.309538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.926 ms 00:32:09.774 [2024-12-14 12:58:09.309545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.310160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.310179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:09.774 [2024-12-14 12:58:09.310189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:32:09.774 [2024-12-14 12:58:09.310199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.369526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.369572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:09.774 [2024-12-14 12:58:09.369586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.310 ms 00:32:09.774 [2024-12-14 12:58:09.369598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.380472] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:09.774 [2024-12-14 12:58:09.383345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.383475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:09.774 [2024-12-14 12:58:09.383493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.708 ms 00:32:09.774 [2024-12-14 12:58:09.383502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.383594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.383605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:09.774 [2024-12-14 12:58:09.383614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:09.774 [2024-12-14 12:58:09.383622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.383691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.383701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:09.774 [2024-12-14 12:58:09.383709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:09.774 [2024-12-14 12:58:09.383717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.383736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.383745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:09.774 [2024-12-14 12:58:09.383753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:09.774 [2024-12-14 12:58:09.383761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.383795] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:09.774 [2024-12-14 12:58:09.383807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.383815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:09.774 [2024-12-14 12:58:09.383823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:32:09.774 [2024-12-14 12:58:09.383831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.407933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.407969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:09.774 [2024-12-14 12:58:09.407981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.085 ms 00:32:09.774 [2024-12-14 12:58:09.407994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:09.774 [2024-12-14 12:58:09.408076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:09.774 [2024-12-14 12:58:09.408086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:09.774 [2024-12-14 12:58:09.408095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:09.774 [2024-12-14 12:58:09.408102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
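
Each management step above is traced as an Action record with a name, a duration, and a status; the 'FTL startup' finish record just below sums them to roughly 302.7 ms. When triaging a slow bring-up, the per-step timings can be pulled out of a saved console log with a short pipeline. A sketch, assuming the raw log (one trace record per line) was saved as build.log; the file name is illustrative:

  # print each traced step name followed by its duration, in log order
  grep 'trace_step' build.log | sed -n \
      -e 's/.*name: /step:     /p' \
      -e 's/.*duration: /duration: /p'
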
00:32:09.774 [2024-12-14 12:58:09.409230] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.705 ms, result 0 00:32:10.718  [2024-12-14T12:58:11.841Z] Copying: 14/1024 [MB] (14 MBps) [2024-12-14T12:58:12.785Z] Copying: 26/1024 [MB] (11 MBps) [2024-12-14T12:58:13.729Z] Copying: 43/1024 [MB] (16 MBps) [2024-12-14T12:58:14.675Z] Copying: 58/1024 [MB] (15 MBps) [2024-12-14T12:58:15.616Z] Copying: 73/1024 [MB] (14 MBps) [2024-12-14T12:58:16.549Z] Copying: 88/1024 [MB] (15 MBps) [2024-12-14T12:58:17.483Z] Copying: 105/1024 [MB] (16 MBps) [2024-12-14T12:58:18.856Z] Copying: 123/1024 [MB] (18 MBps) [2024-12-14T12:58:19.424Z] Copying: 136/1024 [MB] (12 MBps) [2024-12-14T12:58:20.802Z] Copying: 147/1024 [MB] (11 MBps) [2024-12-14T12:58:21.741Z] Copying: 158/1024 [MB] (10 MBps) [2024-12-14T12:58:22.680Z] Copying: 169/1024 [MB] (11 MBps) [2024-12-14T12:58:23.615Z] Copying: 180/1024 [MB] (10 MBps) [2024-12-14T12:58:24.549Z] Copying: 191/1024 [MB] (11 MBps) [2024-12-14T12:58:25.488Z] Copying: 203/1024 [MB] (11 MBps) [2024-12-14T12:58:26.466Z] Copying: 214/1024 [MB] (10 MBps) [2024-12-14T12:58:27.432Z] Copying: 225/1024 [MB] (10 MBps) [2024-12-14T12:58:28.817Z] Copying: 236/1024 [MB] (11 MBps) [2024-12-14T12:58:29.752Z] Copying: 249/1024 [MB] (13 MBps) [2024-12-14T12:58:30.690Z] Copying: 260/1024 [MB] (10 MBps) [2024-12-14T12:58:31.630Z] Copying: 271/1024 [MB] (11 MBps) [2024-12-14T12:58:32.566Z] Copying: 281/1024 [MB] (10 MBps) [2024-12-14T12:58:33.503Z] Copying: 291/1024 [MB] (10 MBps) [2024-12-14T12:58:34.440Z] Copying: 303/1024 [MB] (11 MBps) [2024-12-14T12:58:35.819Z] Copying: 313/1024 [MB] (10 MBps) [2024-12-14T12:58:36.758Z] Copying: 324/1024 [MB] (10 MBps) [2024-12-14T12:58:37.696Z] Copying: 334/1024 [MB] (10 MBps) [2024-12-14T12:58:38.636Z] Copying: 353248/1048576 [kB] (10228 kBps) [2024-12-14T12:58:39.578Z] Copying: 363480/1048576 [kB] (10232 kBps) [2024-12-14T12:58:40.518Z] Copying: 364/1024 [MB] (10 MBps) [2024-12-14T12:58:41.460Z] Copying: 383944/1048576 [kB] (10208 kBps) [2024-12-14T12:58:42.847Z] Copying: 386/1024 [MB] (11 MBps) [2024-12-14T12:58:43.789Z] Copying: 397/1024 [MB] (10 MBps) [2024-12-14T12:58:44.731Z] Copying: 444/1024 [MB] (47 MBps) [2024-12-14T12:58:45.678Z] Copying: 471/1024 [MB] (26 MBps) [2024-12-14T12:58:46.621Z] Copying: 492/1024 [MB] (21 MBps) [2024-12-14T12:58:47.564Z] Copying: 511/1024 [MB] (18 MBps) [2024-12-14T12:58:48.508Z] Copying: 530/1024 [MB] (18 MBps) [2024-12-14T12:58:49.450Z] Copying: 580/1024 [MB] (50 MBps) [2024-12-14T12:58:50.836Z] Copying: 616/1024 [MB] (35 MBps) [2024-12-14T12:58:51.777Z] Copying: 657/1024 [MB] (41 MBps) [2024-12-14T12:58:52.724Z] Copying: 707/1024 [MB] (50 MBps) [2024-12-14T12:58:53.666Z] Copying: 729/1024 [MB] (21 MBps) [2024-12-14T12:58:54.632Z] Copying: 748/1024 [MB] (18 MBps) [2024-12-14T12:58:55.635Z] Copying: 768/1024 [MB] (19 MBps) [2024-12-14T12:58:56.581Z] Copying: 785/1024 [MB] (17 MBps) [2024-12-14T12:58:57.524Z] Copying: 804/1024 [MB] (19 MBps) [2024-12-14T12:58:58.469Z] Copying: 829/1024 [MB] (24 MBps) [2024-12-14T12:58:59.856Z] Copying: 849/1024 [MB] (20 MBps) [2024-12-14T12:59:00.429Z] Copying: 866/1024 [MB] (16 MBps) [2024-12-14T12:59:01.817Z] Copying: 886/1024 [MB] (19 MBps) [2024-12-14T12:59:02.757Z] Copying: 901/1024 [MB] (15 MBps) [2024-12-14T12:59:03.701Z] Copying: 914/1024 [MB] (12 MBps) [2024-12-14T12:59:04.645Z] Copying: 929/1024 [MB] (15 MBps) [2024-12-14T12:59:05.589Z] Copying: 946/1024 [MB] (16 MBps) [2024-12-14T12:59:06.532Z] Copying: 
960/1024 [MB] (14 MBps) [2024-12-14T12:59:07.476Z] Copying: 982/1024 [MB] (21 MBps) [2024-12-14T12:59:08.421Z] Copying: 1003/1024 [MB] (21 MBps) [2024-12-14T12:59:08.421Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-14 12:59:08.317408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.684 [2024-12-14 12:59:08.317446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:08.684 [2024-12-14 12:59:08.317457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:08.684 [2024-12-14 12:59:08.317464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.684 [2024-12-14 12:59:08.317480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:08.684 [2024-12-14 12:59:08.319653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.684 [2024-12-14 12:59:08.319784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:08.684 [2024-12-14 12:59:08.319798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:33:08.684 [2024-12-14 12:59:08.319809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.684 [2024-12-14 12:59:08.321677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.684 [2024-12-14 12:59:08.321709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:08.684 [2024-12-14 12:59:08.321719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.848 ms 00:33:08.684 [2024-12-14 12:59:08.321725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.684 [2024-12-14 12:59:08.321746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.684 [2024-12-14 12:59:08.321754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:08.684 [2024-12-14 12:59:08.321760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:08.684 [2024-12-14 12:59:08.321766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.684 [2024-12-14 12:59:08.321806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.684 [2024-12-14 12:59:08.321813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:08.684 [2024-12-14 12:59:08.321820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:08.684 [2024-12-14 12:59:08.321826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.684 [2024-12-14 12:59:08.321836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:08.684 [2024-12-14 12:59:08.321846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 
/ 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.321945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:08.684 [2024-12-14 12:59:08.322065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322256] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 
12:59:08.322397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:08.685 [2024-12-14 12:59:08.322515] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:08.685 [2024-12-14 12:59:08.322521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4449f4f-a8d5-417d-b7f9-195ffc128a42 00:33:08.685 [2024-12-14 12:59:08.322527] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:08.685 [2024-12-14 12:59:08.322533] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:08.685 [2024-12-14 12:59:08.322538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:08.685 [2024-12-14 12:59:08.322546] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 
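
Annotation (not part of the captured output): the statistics block above reports total writes: 32 against user writes: 0, so the write amplification factor prints as inf. By the usual definition, WAF is total media writes divided by user writes, and this startup/shutdown cycle issued only internal metadata writes. A minimal sketch of that arithmetic, assuming this definition (waf() is an illustrative helper, not an SPDK API):

    def waf(total_writes: int, user_writes: int) -> float:
        # Write amplification factor as printed by the stats dump above;
        # with no user I/O yet the ratio is undefined and reported as inf.
        if user_writes == 0:
            return float("inf")
        return total_writes / user_writes

    print(waf(32, 0))  # inf -- the 32 total writes were all internal metadata

Once user I/O lands (as in the restore pass further down), the ratio becomes finite and tracks how many media writes each user write costs.
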
00:33:08.685 [2024-12-14 12:59:08.322552] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:08.685 [2024-12-14 12:59:08.322558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:08.685 [2024-12-14 12:59:08.322565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:08.685 [2024-12-14 12:59:08.322570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:08.685 [2024-12-14 12:59:08.322575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:08.685 [2024-12-14 12:59:08.322580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.685 [2024-12-14 12:59:08.322586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:08.685 [2024-12-14 12:59:08.322591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:33:08.685 [2024-12-14 12:59:08.322597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.685 [2024-12-14 12:59:08.332409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.685 [2024-12-14 12:59:08.332439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:08.685 [2024-12-14 12:59:08.332447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.801 ms 00:33:08.685 [2024-12-14 12:59:08.332452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.685 [2024-12-14 12:59:08.332721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:08.685 [2024-12-14 12:59:08.332728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:08.685 [2024-12-14 12:59:08.332734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:33:08.686 [2024-12-14 12:59:08.332740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.686 [2024-12-14 12:59:08.358529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.686 [2024-12-14 12:59:08.358641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:08.686 [2024-12-14 12:59:08.358654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.686 [2024-12-14 12:59:08.358660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.686 [2024-12-14 12:59:08.358704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.686 [2024-12-14 12:59:08.358710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:08.686 [2024-12-14 12:59:08.358716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.686 [2024-12-14 12:59:08.358722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.686 [2024-12-14 12:59:08.358756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.686 [2024-12-14 12:59:08.358767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:08.686 [2024-12-14 12:59:08.358773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.686 [2024-12-14 12:59:08.358779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.686 [2024-12-14 12:59:08.358790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.686 [2024-12-14 12:59:08.358795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:08.686 [2024-12-14 12:59:08.358804] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.686 [2024-12-14 12:59:08.358809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.686 [2024-12-14 12:59:08.417523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.686 [2024-12-14 12:59:08.417633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:08.686 [2024-12-14 12:59:08.417645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.686 [2024-12-14 12:59:08.417651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:08.947 [2024-12-14 12:59:08.466528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:08.947 [2024-12-14 12:59:08.466602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:08.947 [2024-12-14 12:59:08.466650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:08.947 [2024-12-14 12:59:08.466732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:08.947 [2024-12-14 12:59:08.466771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:08.947 [2024-12-14 12:59:08.466817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:08.947 [2024-12-14 12:59:08.466867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base 
bdev 00:33:08.947 [2024-12-14 12:59:08.466874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:08.947 [2024-12-14 12:59:08.466880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:08.947 [2024-12-14 12:59:08.466968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 149.552 ms, result 0 00:33:09.518 00:33:09.518 00:33:09.518 12:59:09 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:33:09.518 [2024-12-14 12:59:09.188812] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:09.518 [2024-12-14 12:59:09.188930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86714 ] 00:33:09.779 [2024-12-14 12:59:09.340739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.779 [2024-12-14 12:59:09.417037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:10.038 [2024-12-14 12:59:09.626963] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:10.038 [2024-12-14 12:59:09.627018] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:10.302 [2024-12-14 12:59:09.781943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.781990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:10.302 [2024-12-14 12:59:09.782003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:10.302 [2024-12-14 12:59:09.782011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.782078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.782091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:10.302 [2024-12-14 12:59:09.782100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:33:10.302 [2024-12-14 12:59:09.782107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.782124] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:10.302 [2024-12-14 12:59:09.782824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:10.302 [2024-12-14 12:59:09.782846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.782854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:10.302 [2024-12-14 12:59:09.782863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.727 ms 00:33:10.302 [2024-12-14 12:59:09.782870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.783131] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:10.302 [2024-12-14 12:59:09.783153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.783163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Load super block 00:33:10.302 [2024-12-14 12:59:09.783173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:33:10.302 [2024-12-14 12:59:09.783180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.783223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.783232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:10.302 [2024-12-14 12:59:09.783240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:33:10.302 [2024-12-14 12:59:09.783247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.783536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.783547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:10.302 [2024-12-14 12:59:09.783555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:33:10.302 [2024-12-14 12:59:09.783562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.783623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.783631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:10.302 [2024-12-14 12:59:09.783638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:10.302 [2024-12-14 12:59:09.783645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.783666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.783675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:10.302 [2024-12-14 12:59:09.783685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:10.302 [2024-12-14 12:59:09.783692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.783708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:10.302 [2024-12-14 12:59:09.787404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.787433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:10.302 [2024-12-14 12:59:09.787442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.700 ms 00:33:10.302 [2024-12-14 12:59:09.787449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.787485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.787493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:10.302 [2024-12-14 12:59:09.787501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:10.302 [2024-12-14 12:59:09.787507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.787551] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:10.302 [2024-12-14 12:59:09.787572] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:10.302 [2024-12-14 12:59:09.787607] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 
00:33:10.302 [2024-12-14 12:59:09.787621] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:10.302 [2024-12-14 12:59:09.787729] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:10.302 [2024-12-14 12:59:09.787739] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:10.302 [2024-12-14 12:59:09.787749] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:10.302 [2024-12-14 12:59:09.787759] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:10.302 [2024-12-14 12:59:09.787767] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:10.302 [2024-12-14 12:59:09.787777] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:10.302 [2024-12-14 12:59:09.787784] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:10.302 [2024-12-14 12:59:09.787792] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:10.302 [2024-12-14 12:59:09.787799] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:10.302 [2024-12-14 12:59:09.787806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.787814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:10.302 [2024-12-14 12:59:09.787821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:33:10.302 [2024-12-14 12:59:09.787828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.787909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.302 [2024-12-14 12:59:09.787917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:10.302 [2024-12-14 12:59:09.787924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:10.302 [2024-12-14 12:59:09.787932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.302 [2024-12-14 12:59:09.788027] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:10.302 [2024-12-14 12:59:09.788036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:10.302 [2024-12-14 12:59:09.788044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:10.302 [2024-12-14 12:59:09.788051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.302 [2024-12-14 12:59:09.788076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:10.302 [2024-12-14 12:59:09.788083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:10.302 [2024-12-14 12:59:09.788090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:10.302 [2024-12-14 12:59:09.788098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:10.302 [2024-12-14 12:59:09.788106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:10.302 [2024-12-14 12:59:09.788112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:10.302 [2024-12-14 12:59:09.788119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:10.302 [2024-12-14 12:59:09.788127] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:10.302 [2024-12-14 12:59:09.788133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:10.302 [2024-12-14 12:59:09.788140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:10.303 [2024-12-14 12:59:09.788147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:10.303 [2024-12-14 12:59:09.788159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:10.303 [2024-12-14 12:59:09.788181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:10.303 [2024-12-14 12:59:09.788205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:10.303 [2024-12-14 12:59:09.788226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:10.303 [2024-12-14 12:59:09.788246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:10.303 [2024-12-14 12:59:09.788265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:10.303 [2024-12-14 12:59:09.788285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:10.303 [2024-12-14 12:59:09.788298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:10.303 [2024-12-14 12:59:09.788304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:10.303 [2024-12-14 12:59:09.788311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:10.303 [2024-12-14 12:59:09.788317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:10.303 [2024-12-14 12:59:09.788324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:10.303 [2024-12-14 12:59:09.788330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:10.303 [2024-12-14 12:59:09.788343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:10.303 [2024-12-14 12:59:09.788349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
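
Annotation (not part of the captured output): the MiB figures in the layout dumps above and below and the blk_offs/blk_sz pairs in the superblock metadata dump that follows are two views of the same regions. Assuming the 4 KiB block size SPDK's FTL uses for layout accounting, size_MiB = blk_sz * 4096 / 2**20; a quick cross-check:

    FTL_BLOCK_SIZE = 4096  # bytes per FTL block (assumption)

    def blocks_to_mib(blk_sz: int) -> float:
        # Convert a region size in FTL blocks to MiB, as dump_region prints it.
        return blk_sz * FTL_BLOCK_SIZE / 2**20

    print(blocks_to_mib(0x5000))         # 80.0  -> the 80.00 MiB l2p region above
    print(f"{blocks_to_mib(0x20):.2f}")  # 0.12  -> the 0.12 MiB sb region above
    print(blocks_to_mib(0x1900000))      # 102400.0 -> the data_btm region below

The same conversion ties the 0x1900000-block region in the base-dev metadata dump below to the 102400.00 MiB data_btm region, i.e. the bulk of the 103424.00 MiB base device.
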
00:33:10.303 [2024-12-14 12:59:09.788355] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:10.303 [2024-12-14 12:59:09.788363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:10.303 [2024-12-14 12:59:09.788370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:10.303 [2024-12-14 12:59:09.788386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:10.303 [2024-12-14 12:59:09.788393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:10.303 [2024-12-14 12:59:09.788399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:10.303 [2024-12-14 12:59:09.788406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:10.303 [2024-12-14 12:59:09.788412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:10.303 [2024-12-14 12:59:09.788418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:10.303 [2024-12-14 12:59:09.788427] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:10.303 [2024-12-14 12:59:09.788436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:10.303 [2024-12-14 12:59:09.788444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:10.303 [2024-12-14 12:59:09.788451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:10.303 [2024-12-14 12:59:09.788458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:10.303 [2024-12-14 12:59:09.788465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:10.303 [2024-12-14 12:59:09.788472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:10.303 [2024-12-14 12:59:09.788478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:10.303 [2024-12-14 12:59:09.788485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:10.303 [2024-12-14 12:59:09.788492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:10.303 [2024-12-14 12:59:09.788499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:10.303 [2024-12-14 12:59:09.788507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:10.303 [2024-12-14 12:59:09.788514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:10.303 [2024-12-14 12:59:09.788521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:10.303 
[2024-12-14 12:59:09.788528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:10.303 [2024-12-14 12:59:09.788535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:10.303 [2024-12-14 12:59:09.788542] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:10.303 [2024-12-14 12:59:09.788549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:10.303 [2024-12-14 12:59:09.788557] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:10.303 [2024-12-14 12:59:09.788565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:10.303 [2024-12-14 12:59:09.788572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:10.303 [2024-12-14 12:59:09.788579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:10.303 [2024-12-14 12:59:09.788587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.788594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:10.303 [2024-12-14 12:59:09.788601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:33:10.303 [2024-12-14 12:59:09.788608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.812295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.812326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:10.303 [2024-12-14 12:59:09.812335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.648 ms 00:33:10.303 [2024-12-14 12:59:09.812342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.812421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.812429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:10.303 [2024-12-14 12:59:09.812440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:33:10.303 [2024-12-14 12:59:09.812447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.852149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.852188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:10.303 [2024-12-14 12:59:09.852200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.655 ms 00:33:10.303 [2024-12-14 12:59:09.852208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.852250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.852260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:10.303 [2024-12-14 12:59:09.852268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:10.303 
[2024-12-14 12:59:09.852276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.852368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.852378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:10.303 [2024-12-14 12:59:09.852386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:10.303 [2024-12-14 12:59:09.852393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.852505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.852515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:10.303 [2024-12-14 12:59:09.852523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:33:10.303 [2024-12-14 12:59:09.852530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.866140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.866278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:10.303 [2024-12-14 12:59:09.866294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.592 ms 00:33:10.303 [2024-12-14 12:59:09.866303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.866416] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:10.303 [2024-12-14 12:59:09.866429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:10.303 [2024-12-14 12:59:09.866438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.866449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:10.303 [2024-12-14 12:59:09.866458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:33:10.303 [2024-12-14 12:59:09.866466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.303 [2024-12-14 12:59:09.878710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.303 [2024-12-14 12:59:09.878740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:10.304 [2024-12-14 12:59:09.878751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.230 ms 00:33:10.304 [2024-12-14 12:59:09.878758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.878870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.878878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:10.304 [2024-12-14 12:59:09.878887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:33:10.304 [2024-12-14 12:59:09.878897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.878960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.878970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:10.304 [2024-12-14 12:59:09.878984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:10.304 [2024-12-14 12:59:09.878991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.879575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.879589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:10.304 [2024-12-14 12:59:09.879597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:33:10.304 [2024-12-14 12:59:09.879604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.879623] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:10.304 [2024-12-14 12:59:09.879633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.879641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:10.304 [2024-12-14 12:59:09.879649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:10.304 [2024-12-14 12:59:09.879656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.891025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:10.304 [2024-12-14 12:59:09.891178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.891189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:10.304 [2024-12-14 12:59:09.891199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.505 ms 00:33:10.304 [2024-12-14 12:59:09.891206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.893240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.893400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:10.304 [2024-12-14 12:59:09.893417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:33:10.304 [2024-12-14 12:59:09.893425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.893511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.893521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:10.304 [2024-12-14 12:59:09.893530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:33:10.304 [2024-12-14 12:59:09.893537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.893558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.893572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:10.304 [2024-12-14 12:59:09.893580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:10.304 [2024-12-14 12:59:09.893587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.893616] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:10.304 [2024-12-14 12:59:09.893626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.893633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:10.304 [2024-12-14 12:59:09.893641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:33:10.304 [2024-12-14 12:59:09.893648] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.919116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.919157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:10.304 [2024-12-14 12:59:09.919169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.447 ms 00:33:10.304 [2024-12-14 12:59:09.919177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.919250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:10.304 [2024-12-14 12:59:09.919259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:10.304 [2024-12-14 12:59:09.919267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:33:10.304 [2024-12-14 12:59:09.919275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:10.304 [2024-12-14 12:59:09.920311] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 137.915 ms, result 0 00:33:11.691  [2024-12-14T12:59:12.372Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-14T12:59:13.315Z] Copying: 23/1024 [MB] (12 MBps) [2024-12-14T12:59:14.309Z] Copying: 40/1024 [MB] (16 MBps) [2024-12-14T12:59:15.255Z] Copying: 57/1024 [MB] (17 MBps) [2024-12-14T12:59:16.199Z] Copying: 77/1024 [MB] (19 MBps) [2024-12-14T12:59:17.146Z] Copying: 90/1024 [MB] (13 MBps) [2024-12-14T12:59:18.535Z] Copying: 104/1024 [MB] (14 MBps) [2024-12-14T12:59:19.480Z] Copying: 116/1024 [MB] (11 MBps) [2024-12-14T12:59:20.425Z] Copying: 128/1024 [MB] (11 MBps) [2024-12-14T12:59:21.370Z] Copying: 151/1024 [MB] (22 MBps) [2024-12-14T12:59:22.315Z] Copying: 179/1024 [MB] (28 MBps) [2024-12-14T12:59:23.299Z] Copying: 191/1024 [MB] (12 MBps) [2024-12-14T12:59:24.247Z] Copying: 203/1024 [MB] (12 MBps) [2024-12-14T12:59:25.187Z] Copying: 218/1024 [MB] (14 MBps) [2024-12-14T12:59:26.121Z] Copying: 229/1024 [MB] (10 MBps) [2024-12-14T12:59:27.512Z] Copying: 241/1024 [MB] (12 MBps) [2024-12-14T12:59:28.453Z] Copying: 257/1024 [MB] (15 MBps) [2024-12-14T12:59:29.395Z] Copying: 281/1024 [MB] (23 MBps) [2024-12-14T12:59:30.335Z] Copying: 297/1024 [MB] (16 MBps) [2024-12-14T12:59:31.279Z] Copying: 319/1024 [MB] (22 MBps) [2024-12-14T12:59:32.223Z] Copying: 336/1024 [MB] (16 MBps) [2024-12-14T12:59:33.167Z] Copying: 351/1024 [MB] (15 MBps) [2024-12-14T12:59:34.110Z] Copying: 367/1024 [MB] (16 MBps) [2024-12-14T12:59:35.493Z] Copying: 385/1024 [MB] (17 MBps) [2024-12-14T12:59:36.433Z] Copying: 403/1024 [MB] (17 MBps) [2024-12-14T12:59:37.374Z] Copying: 419/1024 [MB] (16 MBps) [2024-12-14T12:59:38.314Z] Copying: 431/1024 [MB] (11 MBps) [2024-12-14T12:59:39.258Z] Copying: 443/1024 [MB] (11 MBps) [2024-12-14T12:59:40.201Z] Copying: 453/1024 [MB] (10 MBps) [2024-12-14T12:59:41.142Z] Copying: 465/1024 [MB] (11 MBps) [2024-12-14T12:59:42.526Z] Copying: 486/1024 [MB] (21 MBps) [2024-12-14T12:59:43.467Z] Copying: 502/1024 [MB] (15 MBps) [2024-12-14T12:59:44.410Z] Copying: 513/1024 [MB] (11 MBps) [2024-12-14T12:59:45.347Z] Copying: 524/1024 [MB] (10 MBps) [2024-12-14T12:59:46.287Z] Copying: 542/1024 [MB] (18 MBps) [2024-12-14T12:59:47.225Z] Copying: 556/1024 [MB] (13 MBps) [2024-12-14T12:59:48.165Z] Copying: 567/1024 [MB] (11 MBps) [2024-12-14T12:59:49.548Z] Copying: 582/1024 [MB] (14 MBps) [2024-12-14T12:59:50.119Z] Copying: 598/1024 [MB] (15 MBps) [2024-12-14T12:59:51.501Z] Copying: 614/1024 [MB] (16 MBps) 
[2024-12-14T12:59:52.471Z] Copying: 641/1024 [MB] (27 MBps) [2024-12-14T12:59:53.417Z] Copying: 670/1024 [MB] (29 MBps) [2024-12-14T12:59:54.361Z] Copying: 697/1024 [MB] (26 MBps) [2024-12-14T12:59:55.305Z] Copying: 719/1024 [MB] (22 MBps) [2024-12-14T12:59:56.248Z] Copying: 745/1024 [MB] (25 MBps) [2024-12-14T12:59:57.189Z] Copying: 766/1024 [MB] (20 MBps) [2024-12-14T12:59:58.132Z] Copying: 781/1024 [MB] (15 MBps) [2024-12-14T12:59:59.517Z] Copying: 793/1024 [MB] (11 MBps) [2024-12-14T13:00:00.461Z] Copying: 809/1024 [MB] (16 MBps) [2024-12-14T13:00:01.405Z] Copying: 830/1024 [MB] (20 MBps) [2024-12-14T13:00:02.349Z] Copying: 853/1024 [MB] (22 MBps) [2024-12-14T13:00:03.291Z] Copying: 878/1024 [MB] (25 MBps) [2024-12-14T13:00:04.234Z] Copying: 900/1024 [MB] (22 MBps) [2024-12-14T13:00:05.179Z] Copying: 917/1024 [MB] (17 MBps) [2024-12-14T13:00:06.121Z] Copying: 940/1024 [MB] (22 MBps) [2024-12-14T13:00:07.506Z] Copying: 954/1024 [MB] (14 MBps) [2024-12-14T13:00:08.448Z] Copying: 976/1024 [MB] (21 MBps) [2024-12-14T13:00:09.393Z] Copying: 995/1024 [MB] (19 MBps) [2024-12-14T13:00:10.338Z] Copying: 1008/1024 [MB] (12 MBps) [2024-12-14T13:00:10.600Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-12-14 13:00:10.515801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.863 [2024-12-14 13:00:10.515893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:10.863 [2024-12-14 13:00:10.515911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:10.863 [2024-12-14 13:00:10.515921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.863 [2024-12-14 13:00:10.515954] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:10.863 [2024-12-14 13:00:10.519510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.863 [2024-12-14 13:00:10.519546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:10.863 [2024-12-14 13:00:10.519558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.538 ms 00:34:10.863 [2024-12-14 13:00:10.519568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.863 [2024-12-14 13:00:10.519948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.863 [2024-12-14 13:00:10.519960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:10.863 [2024-12-14 13:00:10.519971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.351 ms 00:34:10.864 [2024-12-14 13:00:10.519979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.864 [2024-12-14 13:00:10.520015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.864 [2024-12-14 13:00:10.520025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:10.864 [2024-12-14 13:00:10.520034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:10.864 [2024-12-14 13:00:10.520043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.864 [2024-12-14 13:00:10.520114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.864 [2024-12-14 13:00:10.520125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:10.864 [2024-12-14 13:00:10.520134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:34:10.864 [2024-12-14 13:00:10.520142] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.864 [2024-12-14 13:00:10.520157] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:10.864 [2024-12-14 13:00:10.520171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520569] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 
13:00:10.520766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:10.864 [2024-12-14 13:00:10.520838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:34:10.865 [2024-12-14 13:00:10.520962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:10.865 [2024-12-14 13:00:10.520985] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:10.865 [2024-12-14 13:00:10.520993] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4449f4f-a8d5-417d-b7f9-195ffc128a42 00:34:10.865 [2024-12-14 13:00:10.521000] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:34:10.865 [2024-12-14 13:00:10.521008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:34:10.865 [2024-12-14 13:00:10.521016] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:34:10.865 [2024-12-14 13:00:10.521024] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:34:10.865 [2024-12-14 13:00:10.521031] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:10.865 [2024-12-14 13:00:10.521039] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:10.865 [2024-12-14 13:00:10.521052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:10.865 [2024-12-14 13:00:10.521463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:10.865 [2024-12-14 13:00:10.521473] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:10.865 [2024-12-14 13:00:10.521482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.865 [2024-12-14 13:00:10.521491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:10.865 [2024-12-14 13:00:10.521501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.325 ms 00:34:10.865 [2024-12-14 13:00:10.521514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.865 [2024-12-14 13:00:10.536869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.865 [2024-12-14 13:00:10.537115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:10.865 [2024-12-14 13:00:10.537138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.331 ms 00:34:10.865 [2024-12-14 13:00:10.537147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.865 [2024-12-14 13:00:10.537558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:10.865 [2024-12-14 13:00:10.537570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:10.865 [2024-12-14 13:00:10.537587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:34:10.865 [2024-12-14 13:00:10.537595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.865 [2024-12-14 13:00:10.575092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:10.865 [2024-12-14 13:00:10.575246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:10.865 [2024-12-14 13:00:10.575310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:10.865 [2024-12-14 13:00:10.575337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.865 [2024-12-14 13:00:10.575429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:10.865 [2024-12-14 13:00:10.575453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:34:10.865 [2024-12-14 13:00:10.575483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:10.865 [2024-12-14 13:00:10.575504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.865 [2024-12-14 13:00:10.575585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:10.865 [2024-12-14 13:00:10.575610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:10.865 [2024-12-14 13:00:10.575632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:10.865 [2024-12-14 13:00:10.575696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:10.865 [2024-12-14 13:00:10.575732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:10.865 [2024-12-14 13:00:10.575752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:10.865 [2024-12-14 13:00:10.575811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:10.865 [2024-12-14 13:00:10.575840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.660306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.660492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:11.126 [2024-12-14 13:00:10.660551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.660575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.729052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.729256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:11.126 [2024-12-14 13:00:10.729314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.729345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.729454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.729480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:11.126 [2024-12-14 13:00:10.729501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.729520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.729574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.729596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:11.126 [2024-12-14 13:00:10.729617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.729713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.729824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.729939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:11.126 [2024-12-14 13:00:10.729999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.730021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.730082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:34:11.126 [2024-12-14 13:00:10.730106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:11.126 [2024-12-14 13:00:10.730127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.730146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.730205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.730228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:11.126 [2024-12-14 13:00:10.730248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.730267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.730366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:11.126 [2024-12-14 13:00:10.730393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:11.126 [2024-12-14 13:00:10.730413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:11.126 [2024-12-14 13:00:10.730432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:11.126 [2024-12-14 13:00:10.730581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 214.760 ms, result 0 00:34:12.068 00:34:12.068 00:34:12.068 13:00:11 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:34:13.983 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:34:13.983 13:00:13 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:34:13.983 [2024-12-14 13:00:13.621855] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:34:13.983 [2024-12-14 13:00:13.621984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87352 ] 00:34:14.243 [2024-12-14 13:00:13.783300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.243 [2024-12-14 13:00:13.891022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.504 [2024-12-14 13:00:14.184686] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:14.504 [2024-12-14 13:00:14.184776] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:14.767 [2024-12-14 13:00:14.345945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.346012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:14.767 [2024-12-14 13:00:14.346028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:14.767 [2024-12-14 13:00:14.346038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.346116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.346129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:14.767 [2024-12-14 13:00:14.346139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:34:14.767 [2024-12-14 13:00:14.346147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.346168] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:14.767 [2024-12-14 13:00:14.346911] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:14.767 [2024-12-14 13:00:14.346943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.346952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:14.767 [2024-12-14 13:00:14.346963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:34:14.767 [2024-12-14 13:00:14.346971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.347283] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:14.767 [2024-12-14 13:00:14.347317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.347329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:14.767 [2024-12-14 13:00:14.347339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:14.767 [2024-12-14 13:00:14.347347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.347401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.347411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:14.767 [2024-12-14 13:00:14.347420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:14.767 [2024-12-14 13:00:14.347428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.347698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:14.767 [2024-12-14 13:00:14.347710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:14.767 [2024-12-14 13:00:14.347718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:34:14.767 [2024-12-14 13:00:14.347725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.347834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.347846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:14.767 [2024-12-14 13:00:14.347854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:14.767 [2024-12-14 13:00:14.347862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.347885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.347894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:14.767 [2024-12-14 13:00:14.347907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:14.767 [2024-12-14 13:00:14.347917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.347942] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:14.767 [2024-12-14 13:00:14.352173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.352213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:14.767 [2024-12-14 13:00:14.352225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.235 ms 00:34:14.767 [2024-12-14 13:00:14.352234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.352280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.767 [2024-12-14 13:00:14.352291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:14.767 [2024-12-14 13:00:14.352300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:14.767 [2024-12-14 13:00:14.352308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.767 [2024-12-14 13:00:14.352363] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:14.767 [2024-12-14 13:00:14.352388] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:14.768 [2024-12-14 13:00:14.352429] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:14.768 [2024-12-14 13:00:14.352446] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:14.768 [2024-12-14 13:00:14.352554] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:14.768 [2024-12-14 13:00:14.352566] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:14.768 [2024-12-14 13:00:14.352578] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:14.768 [2024-12-14 13:00:14.352589] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:14.768 [2024-12-14 13:00:14.352599] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:14.768 [2024-12-14 13:00:14.352611] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:14.768 [2024-12-14 13:00:14.352621] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:14.768 [2024-12-14 13:00:14.352630] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:14.768 [2024-12-14 13:00:14.352639] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:14.768 [2024-12-14 13:00:14.352648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.768 [2024-12-14 13:00:14.352656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:14.768 [2024-12-14 13:00:14.352665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:34:14.768 [2024-12-14 13:00:14.352672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.768 [2024-12-14 13:00:14.352755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.768 [2024-12-14 13:00:14.352764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:14.768 [2024-12-14 13:00:14.352772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:14.768 [2024-12-14 13:00:14.352782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.768 [2024-12-14 13:00:14.352889] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:14.768 [2024-12-14 13:00:14.352900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:14.768 [2024-12-14 13:00:14.352909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:14.768 [2024-12-14 13:00:14.352916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.352924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:14.768 [2024-12-14 13:00:14.352932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.352939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:14.768 [2024-12-14 13:00:14.352948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:14.768 [2024-12-14 13:00:14.352955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:14.768 [2024-12-14 13:00:14.352962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:14.768 [2024-12-14 13:00:14.352969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:14.768 [2024-12-14 13:00:14.352978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:14.768 [2024-12-14 13:00:14.352985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:14.768 [2024-12-14 13:00:14.352993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:14.768 [2024-12-14 13:00:14.353000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:14.768 [2024-12-14 13:00:14.353014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:14.768 [2024-12-14 13:00:14.353028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353035] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:14.768 [2024-12-14 13:00:14.353048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:14.768 [2024-12-14 13:00:14.353097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:14.768 [2024-12-14 13:00:14.353118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:14.768 [2024-12-14 13:00:14.353138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:14.768 [2024-12-14 13:00:14.353158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:14.768 [2024-12-14 13:00:14.353171] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:14.768 [2024-12-14 13:00:14.353178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:14.768 [2024-12-14 13:00:14.353185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:14.768 [2024-12-14 13:00:14.353192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:14.768 [2024-12-14 13:00:14.353199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:14.768 [2024-12-14 13:00:14.353205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:14.768 [2024-12-14 13:00:14.353219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:14.768 [2024-12-14 13:00:14.353226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:14.768 [2024-12-14 13:00:14.353243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:14.768 [2024-12-14 13:00:14.353251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.768 [2024-12-14 13:00:14.353269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:14.768 [2024-12-14 13:00:14.353275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:14.768 [2024-12-14 13:00:14.353282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:14.768 
[2024-12-14 13:00:14.353289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:14.768 [2024-12-14 13:00:14.353296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:14.768 [2024-12-14 13:00:14.353302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:14.768 [2024-12-14 13:00:14.353310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:14.768 [2024-12-14 13:00:14.353319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:14.768 [2024-12-14 13:00:14.353336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:14.768 [2024-12-14 13:00:14.353344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:14.768 [2024-12-14 13:00:14.353351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:14.768 [2024-12-14 13:00:14.353358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:14.768 [2024-12-14 13:00:14.353365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:14.768 [2024-12-14 13:00:14.353371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:14.768 [2024-12-14 13:00:14.353378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:14.768 [2024-12-14 13:00:14.353385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:14.768 [2024-12-14 13:00:14.353392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:14.768 [2024-12-14 13:00:14.353455] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:14.768 [2024-12-14 13:00:14.353463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:14.768 [2024-12-14 13:00:14.353481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:14.768 [2024-12-14 13:00:14.353489] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:14.768 [2024-12-14 13:00:14.353496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:14.768 [2024-12-14 13:00:14.353505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.353513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:14.769 [2024-12-14 13:00:14.353522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:34:14.769 [2024-12-14 13:00:14.353530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.381236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.381426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:14.769 [2024-12-14 13:00:14.381833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.662 ms 00:34:14.769 [2024-12-14 13:00:14.381888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.382444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.382549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:14.769 [2024-12-14 13:00:14.382614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:14.769 [2024-12-14 13:00:14.382641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.423622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.423809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:14.769 [2024-12-14 13:00:14.423879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.877 ms 00:34:14.769 [2024-12-14 13:00:14.423903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.423973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.423999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:14.769 [2024-12-14 13:00:14.424020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:14.769 [2024-12-14 13:00:14.424040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.424187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.424329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:14.769 [2024-12-14 13:00:14.424355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:14.769 [2024-12-14 13:00:14.424375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.424526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.424606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:14.769 [2024-12-14 13:00:14.424632] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:34:14.769 [2024-12-14 13:00:14.424651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.440221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.440372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:14.769 [2024-12-14 13:00:14.440428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.534 ms 00:34:14.769 [2024-12-14 13:00:14.440450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.440620] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:34:14.769 [2024-12-14 13:00:14.440661] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:14.769 [2024-12-14 13:00:14.440692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.440714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:14.769 [2024-12-14 13:00:14.440793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:34:14.769 [2024-12-14 13:00:14.440815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.453118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.453258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:14.769 [2024-12-14 13:00:14.453316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.268 ms 00:34:14.769 [2024-12-14 13:00:14.453338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.453494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.453519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:14.769 [2024-12-14 13:00:14.453539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:34:14.769 [2024-12-14 13:00:14.453565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.453628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.453740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:14.769 [2024-12-14 13:00:14.453770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:14.769 [2024-12-14 13:00:14.453790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.454401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.454509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:14.769 [2024-12-14 13:00:14.454562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:34:14.769 [2024-12-14 13:00:14.454584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.454627] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:14.769 [2024-12-14 13:00:14.454685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.454706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:14.769 [2024-12-14 13:00:14.454726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:34:14.769 [2024-12-14 13:00:14.454772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.467243] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:14.769 [2024-12-14 13:00:14.467515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.467548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:14.769 [2024-12-14 13:00:14.467610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.682 ms 00:34:14.769 [2024-12-14 13:00:14.467632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.469787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.469819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:14.769 [2024-12-14 13:00:14.469829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:34:14.769 [2024-12-14 13:00:14.469837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.469931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.469941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:14.769 [2024-12-14 13:00:14.469958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:14.769 [2024-12-14 13:00:14.469966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.469989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.470001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:14.769 [2024-12-14 13:00:14.470009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:14.769 [2024-12-14 13:00:14.470017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.470047] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:14.769 [2024-12-14 13:00:14.470080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.470089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:14.769 [2024-12-14 13:00:14.470097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:14.769 [2024-12-14 13:00:14.470104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.496491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.496544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:14.769 [2024-12-14 13:00:14.496559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.366 ms 00:34:14.769 [2024-12-14 13:00:14.496567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.496651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.769 [2024-12-14 13:00:14.496662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:14.769 [2024-12-14 13:00:14.496672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms 00:34:14.769 [2024-12-14 13:00:14.496680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.769 [2024-12-14 13:00:14.497881] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.465 ms, result 0 00:34:16.155  [2024-12-14T13:00:16.836Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-14T13:00:17.780Z] Copying: 24/1024 [MB] (11 MBps) [2024-12-14T13:00:18.724Z] Copying: 41/1024 [MB] (16 MBps) [2024-12-14T13:00:19.668Z] Copying: 56/1024 [MB] (15 MBps) [2024-12-14T13:00:20.612Z] Copying: 69/1024 [MB] (13 MBps) [2024-12-14T13:00:21.623Z] Copying: 80/1024 [MB] (10 MBps) [2024-12-14T13:00:22.567Z] Copying: 92/1024 [MB] (11 MBps) [2024-12-14T13:00:23.952Z] Copying: 102/1024 [MB] (10 MBps) [2024-12-14T13:00:24.523Z] Copying: 118/1024 [MB] (16 MBps) [2024-12-14T13:00:25.909Z] Copying: 154/1024 [MB] (35 MBps) [2024-12-14T13:00:26.854Z] Copying: 173/1024 [MB] (18 MBps) [2024-12-14T13:00:27.798Z] Copying: 193/1024 [MB] (20 MBps) [2024-12-14T13:00:28.741Z] Copying: 224/1024 [MB] (31 MBps) [2024-12-14T13:00:29.684Z] Copying: 249/1024 [MB] (24 MBps) [2024-12-14T13:00:30.626Z] Copying: 260/1024 [MB] (11 MBps) [2024-12-14T13:00:31.570Z] Copying: 280/1024 [MB] (19 MBps) [2024-12-14T13:00:32.512Z] Copying: 299/1024 [MB] (19 MBps) [2024-12-14T13:00:33.897Z] Copying: 318/1024 [MB] (18 MBps) [2024-12-14T13:00:34.839Z] Copying: 340/1024 [MB] (21 MBps) [2024-12-14T13:00:35.781Z] Copying: 359/1024 [MB] (19 MBps) [2024-12-14T13:00:36.723Z] Copying: 378/1024 [MB] (18 MBps) [2024-12-14T13:00:37.668Z] Copying: 404/1024 [MB] (25 MBps) [2024-12-14T13:00:38.612Z] Copying: 415/1024 [MB] (11 MBps) [2024-12-14T13:00:39.555Z] Copying: 439/1024 [MB] (23 MBps) [2024-12-14T13:00:40.940Z] Copying: 463/1024 [MB] (23 MBps) [2024-12-14T13:00:41.513Z] Copying: 475/1024 [MB] (12 MBps) [2024-12-14T13:00:42.899Z] Copying: 497/1024 [MB] (21 MBps) [2024-12-14T13:00:43.843Z] Copying: 515/1024 [MB] (17 MBps) [2024-12-14T13:00:44.786Z] Copying: 529/1024 [MB] (14 MBps) [2024-12-14T13:00:45.726Z] Copying: 556/1024 [MB] (26 MBps) [2024-12-14T13:00:46.670Z] Copying: 602/1024 [MB] (45 MBps) [2024-12-14T13:00:47.612Z] Copying: 627/1024 [MB] (25 MBps) [2024-12-14T13:00:48.557Z] Copying: 640/1024 [MB] (13 MBps) [2024-12-14T13:00:49.943Z] Copying: 684/1024 [MB] (44 MBps) [2024-12-14T13:00:50.581Z] Copying: 724/1024 [MB] (39 MBps) [2024-12-14T13:00:51.525Z] Copying: 775/1024 [MB] (50 MBps) [2024-12-14T13:00:52.909Z] Copying: 799/1024 [MB] (24 MBps) [2024-12-14T13:00:53.851Z] Copying: 812/1024 [MB] (12 MBps) [2024-12-14T13:00:54.792Z] Copying: 851/1024 [MB] (38 MBps) [2024-12-14T13:00:55.735Z] Copying: 895/1024 [MB] (43 MBps) [2024-12-14T13:00:56.677Z] Copying: 920/1024 [MB] (25 MBps) [2024-12-14T13:00:57.620Z] Copying: 933/1024 [MB] (12 MBps) [2024-12-14T13:00:58.563Z] Copying: 948/1024 [MB] (14 MBps) [2024-12-14T13:00:59.950Z] Copying: 958/1024 [MB] (10 MBps) [2024-12-14T13:01:00.524Z] Copying: 969/1024 [MB] (10 MBps) [2024-12-14T13:01:01.911Z] Copying: 981/1024 [MB] (12 MBps) [2024-12-14T13:01:02.854Z] Copying: 1012/1024 [MB] (31 MBps) [2024-12-14T13:01:03.116Z] Copying: 1023/1024 [MB] (11 MBps) [2024-12-14T13:01:03.116Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-12-14 13:01:02.945491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.379 [2024-12-14 13:01:02.945589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:03.379 [2024-12-14 13:01:02.945608] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:35:03.379 [2024-12-14 13:01:02.945618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.379 [2024-12-14 13:01:02.947543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:03.379 [2024-12-14 13:01:02.954456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.379 [2024-12-14 13:01:02.954643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:03.379 [2024-12-14 13:01:02.954860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.812 ms 00:35:03.379 [2024-12-14 13:01:02.954886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.379 [2024-12-14 13:01:02.965087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.379 [2024-12-14 13:01:02.965264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:03.379 [2024-12-14 13:01:02.965333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.805 ms 00:35:03.379 [2024-12-14 13:01:02.965359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.379 [2024-12-14 13:01:02.965424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.379 [2024-12-14 13:01:02.965449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:03.379 [2024-12-14 13:01:02.965471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:03.379 [2024-12-14 13:01:02.965492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.379 [2024-12-14 13:01:02.965565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.379 [2024-12-14 13:01:02.965780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:03.379 [2024-12-14 13:01:02.965808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:35:03.379 [2024-12-14 13:01:02.965827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.379 [2024-12-14 13:01:02.965860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:03.379 [2024-12-14 13:01:02.965960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128256 / 261120 wr_cnt: 1 state: open 00:35:03.379 [2024-12-14 13:01:02.965999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 
13:01:02.966307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 
00:35:03.379 [2024-12-14 13:01:02.966838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.966999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.967007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.967015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.967023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.967030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.967039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:03.379 [2024-12-14 13:01:02.967047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 
wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:03.380 [2024-12-14 13:01:02.967410] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:03.380 [2024-12-14 13:01:02.967418] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4449f4f-a8d5-417d-b7f9-195ffc128a42 00:35:03.380 [2024-12-14 13:01:02.967426] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128256 00:35:03.380 [2024-12-14 13:01:02.967433] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128288 00:35:03.380 [2024-12-14 13:01:02.967440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128256 00:35:03.380 [2024-12-14 13:01:02.967450] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:35:03.380 [2024-12-14 13:01:02.967462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:03.380 [2024-12-14 13:01:02.967478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:03.380 [2024-12-14 13:01:02.967486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:03.380 [2024-12-14 13:01:02.967493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:03.380 [2024-12-14 13:01:02.967500] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:03.380 [2024-12-14 13:01:02.967509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.380 [2024-12-14 13:01:02.967518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:03.380 [2024-12-14 13:01:02.967528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.650 ms 00:35:03.380 [2024-12-14 13:01:02.967535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:02.981402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.380 [2024-12-14 13:01:02.981609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:03.380 [2024-12-14 13:01:02.981637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.841 ms 00:35:03.380 [2024-12-14 13:01:02.981646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:02.982088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:03.380 [2024-12-14 13:01:02.982103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:03.380 [2024-12-14 13:01:02.982114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:35:03.380 [2024-12-14 13:01:02.982122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:03.019134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.380 [2024-12-14 13:01:03.019191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:03.380 [2024-12-14 13:01:03.019203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.380 [2024-12-14 13:01:03.019211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:03.019277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.380 [2024-12-14 13:01:03.019286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:03.380 [2024-12-14 13:01:03.019294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.380 [2024-12-14 13:01:03.019302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:03.019367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.380 [2024-12-14 13:01:03.019378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:03.380 [2024-12-14 13:01:03.019391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.380 [2024-12-14 13:01:03.019399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:03.019416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.380 [2024-12-14 13:01:03.019425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:03.380 [2024-12-14 13:01:03.019432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.380 [2024-12-14 13:01:03.019441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.380 [2024-12-14 13:01:03.104189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.380 [2024-12-14 13:01:03.104421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:03.380 [2024-12-14 13:01:03.104444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:35:03.380 [2024-12-14 13:01:03.104453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:03.641 [2024-12-14 13:01:03.175114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:03.641 [2024-12-14 13:01:03.175247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:03.641 [2024-12-14 13:01:03.175315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:03.641 [2024-12-14 13:01:03.175421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:03.641 [2024-12-14 13:01:03.175476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:03.641 [2024-12-14 13:01:03.175543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:03.641 [2024-12-14 13:01:03.175613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:03.641 [2024-12-14 13:01:03.175621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:03.641 [2024-12-14 13:01:03.175629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:03.641 [2024-12-14 13:01:03.175761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 232.374 ms, result 0 00:35:05.027 00:35:05.027 00:35:05.027 13:01:04 ftl.ftl_restore_fast -- 
ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:35:05.288 [2024-12-14 13:01:04.823714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:35:05.288 [2024-12-14 13:01:04.824807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87857 ] 00:35:05.288 [2024-12-14 13:01:04.999472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.552 [2024-12-14 13:01:05.100663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.814 [2024-12-14 13:01:05.394016] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:05.814 [2024-12-14 13:01:05.394130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:35:06.076 [2024-12-14 13:01:05.555878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.555948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:35:06.076 [2024-12-14 13:01:05.555965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:06.076 [2024-12-14 13:01:05.555974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.556035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.556049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:06.076 [2024-12-14 13:01:05.556083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:35:06.076 [2024-12-14 13:01:05.556093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.556115] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:35:06.076 [2024-12-14 13:01:05.556883] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:35:06.076 [2024-12-14 13:01:05.556910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.556919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:06.076 [2024-12-14 13:01:05.556929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.801 ms 00:35:06.076 [2024-12-14 13:01:05.556936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.557259] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:35:06.076 [2024-12-14 13:01:05.557288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.557299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:35:06.076 [2024-12-14 13:01:05.557309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:35:06.076 [2024-12-14 13:01:05.557318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.557374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.557384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate 
super block 00:35:06.076 [2024-12-14 13:01:05.557392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:35:06.076 [2024-12-14 13:01:05.557399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.557716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.557729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:06.076 [2024-12-14 13:01:05.557738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:35:06.076 [2024-12-14 13:01:05.557746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.557860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.557871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:06.076 [2024-12-14 13:01:05.557880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:35:06.076 [2024-12-14 13:01:05.557888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.557912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.557921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:35:06.076 [2024-12-14 13:01:05.557932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:35:06.076 [2024-12-14 13:01:05.557940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.557962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:35:06.076 [2024-12-14 13:01:05.562332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.562375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:06.076 [2024-12-14 13:01:05.562386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.375 ms 00:35:06.076 [2024-12-14 13:01:05.562394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.562438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.562446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:35:06.076 [2024-12-14 13:01:05.562455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:35:06.076 [2024-12-14 13:01:05.562463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.562523] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:35:06.076 [2024-12-14 13:01:05.562548] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:35:06.076 [2024-12-14 13:01:05.562587] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:35:06.076 [2024-12-14 13:01:05.562603] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:35:06.076 [2024-12-14 13:01:05.562709] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:35:06.076 [2024-12-14 13:01:05.562720] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:35:06.076 
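The startup sequence above is traced step by step by mngt/ftl_mngt.c: each management step logs an Action (or Rollback) record at line 427, its name at 428, its duration at 430, and its status at 431. A minimal sketch, not part of the test suite, for tabulating per-step durations from a capture like this one; it assumes the original one-record-per-line console output (before any re-wrapping) and a hypothetical console.log file name:

awk '
  # 428:trace_step carries the step name; remember it for the next record.
  /428:trace_step/ { sub(/.*name: /, "");     name = $0 }
  # 430:trace_step carries the duration; print the name/duration pair.
  /430:trace_step/ { sub(/.*duration: /, ""); printf "%-36s %s\n", name, $0 }
' console.log
# e.g. one output row from the records above: "Open cache bdev    0.801 ms"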
[2024-12-14 13:01:05.562730] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:35:06.076 [2024-12-14 13:01:05.562741] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:35:06.076 [2024-12-14 13:01:05.562750] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:35:06.076 [2024-12-14 13:01:05.562761] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:35:06.076 [2024-12-14 13:01:05.562769] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:35:06.076 [2024-12-14 13:01:05.562778] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:35:06.076 [2024-12-14 13:01:05.562787] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:35:06.076 [2024-12-14 13:01:05.562795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.562803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:35:06.076 [2024-12-14 13:01:05.562811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:35:06.076 [2024-12-14 13:01:05.562818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.562905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.076 [2024-12-14 13:01:05.562914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:35:06.076 [2024-12-14 13:01:05.562921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:35:06.076 [2024-12-14 13:01:05.562931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.076 [2024-12-14 13:01:05.563028] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:35:06.076 [2024-12-14 13:01:05.563038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:35:06.076 [2024-12-14 13:01:05.563046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:06.076 [2024-12-14 13:01:05.563084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.076 [2024-12-14 13:01:05.563094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:35:06.076 [2024-12-14 13:01:05.563101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:35:06.076 [2024-12-14 13:01:05.563109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:35:06.076 [2024-12-14 13:01:05.563117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:35:06.076 [2024-12-14 13:01:05.563124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:35:06.076 [2024-12-14 13:01:05.563131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:06.077 [2024-12-14 13:01:05.563139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:35:06.077 [2024-12-14 13:01:05.563147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:35:06.077 [2024-12-14 13:01:05.563153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:35:06.077 [2024-12-14 13:01:05.563160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:35:06.077 [2024-12-14 13:01:05.563167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:35:06.077 [2024-12-14 13:01:05.563182] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:35:06.077 [2024-12-14 13:01:05.563195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:35:06.077 [2024-12-14 13:01:05.563218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:35:06.077 [2024-12-14 13:01:05.563240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:35:06.077 [2024-12-14 13:01:05.563260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:35:06.077 [2024-12-14 13:01:05.563279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:35:06.077 [2024-12-14 13:01:05.563302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:06.077 [2024-12-14 13:01:05.563315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:35:06.077 [2024-12-14 13:01:05.563322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:35:06.077 [2024-12-14 13:01:05.563329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:35:06.077 [2024-12-14 13:01:05.563336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:35:06.077 [2024-12-14 13:01:05.563343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:35:06.077 [2024-12-14 13:01:05.563350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:35:06.077 [2024-12-14 13:01:05.563364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:35:06.077 [2024-12-14 13:01:05.563372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.077 [2024-12-14 13:01:05.563379] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:35:06.077 [2024-12-14 13:01:05.563387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:35:06.077 [2024-12-14 13:01:05.563395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:35:06.077 [2024-12-14 
13:01:05.563414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:35:06.077 [2024-12-14 13:01:05.563422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:35:06.077 [2024-12-14 13:01:05.563428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:35:06.077 [2024-12-14 13:01:05.563435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:35:06.077 [2024-12-14 13:01:05.563443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:35:06.077 [2024-12-14 13:01:05.563450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:35:06.077 [2024-12-14 13:01:05.563459] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:35:06.077 [2024-12-14 13:01:05.563469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:35:06.077 [2024-12-14 13:01:05.563487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:35:06.077 [2024-12-14 13:01:05.563494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:35:06.077 [2024-12-14 13:01:05.563501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:35:06.077 [2024-12-14 13:01:05.563508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:35:06.077 [2024-12-14 13:01:05.563516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:35:06.077 [2024-12-14 13:01:05.563523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:35:06.077 [2024-12-14 13:01:05.563530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:35:06.077 [2024-12-14 13:01:05.563537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:35:06.077 [2024-12-14 13:01:05.563544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:35:06.077 [2024-12-14 13:01:05.563582] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:35:06.077 [2024-12-14 13:01:05.563590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:35:06.077 [2024-12-14 13:01:05.563606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:35:06.077 [2024-12-14 13:01:05.563614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:35:06.077 [2024-12-14 13:01:05.563622] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:35:06.077 [2024-12-14 13:01:05.563629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.563638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:35:06.077 [2024-12-14 13:01:05.563646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:35:06.077 [2024-12-14 13:01:05.563653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.591962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.592203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:06.077 [2024-12-14 13:01:05.592225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.266 ms 00:35:06.077 [2024-12-14 13:01:05.592235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.592331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.592341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:35:06.077 [2024-12-14 13:01:05.592357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:35:06.077 [2024-12-14 13:01:05.592364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.638131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.638187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:06.077 [2024-12-14 13:01:05.638202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.704 ms 00:35:06.077 [2024-12-14 13:01:05.638210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.638267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.638277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:06.077 [2024-12-14 13:01:05.638287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:06.077 [2024-12-14 13:01:05.638295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.638412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.638424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:35:06.077 [2024-12-14 13:01:05.638434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:35:06.077 [2024-12-14 13:01:05.638442] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.638570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.638583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:06.077 [2024-12-14 13:01:05.638592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:35:06.077 [2024-12-14 13:01:05.638599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.654587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.654633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:06.077 [2024-12-14 13:01:05.654645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.967 ms 00:35:06.077 [2024-12-14 13:01:05.654654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.077 [2024-12-14 13:01:05.654816] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:35:06.077 [2024-12-14 13:01:05.654830] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:35:06.077 [2024-12-14 13:01:05.654840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.077 [2024-12-14 13:01:05.654852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:35:06.077 [2024-12-14 13:01:05.654861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:35:06.078 [2024-12-14 13:01:05.654869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.667340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.667387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:35:06.078 [2024-12-14 13:01:05.667398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.450 ms 00:35:06.078 [2024-12-14 13:01:05.667406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.667530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.667541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:35:06.078 [2024-12-14 13:01:05.667550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:35:06.078 [2024-12-14 13:01:05.667563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.667620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.667629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:35:06.078 [2024-12-14 13:01:05.667638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:35:06.078 [2024-12-14 13:01:05.667653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.668272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.668288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:35:06.078 [2024-12-14 13:01:05.668298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:35:06.078 [2024-12-14 13:01:05.668306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 
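This run appears to restore FTL state from shared memory rather than replaying metadata from disk, which is why the restore steps above finish in milliseconds. A minimal sketch, not from the SPDK tree, that checks a captured log for the fast-restore markers seen around this sequence (same hypothetical console.log name as above):

log=console.log
for marker in \
    'SHM: clean 1, shm_clean 1' \
    'FTL NV Cache: state loaded successfully' \
    'SHM: skipping p2l ckpt restore'
do
    # grep -qF: fixed-string match, quiet; the exit status drives the branch.
    if grep -qF "$marker" "$log"; then
        echo "ok:      $marker"
    else
        echo "missing: $marker" >&2
    fi
done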
[2024-12-14 13:01:05.668331] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:35:06.078 [2024-12-14 13:01:05.668342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.668351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:35:06.078 [2024-12-14 13:01:05.668360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:35:06.078 [2024-12-14 13:01:05.668368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.681042] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:35:06.078 [2024-12-14 13:01:05.681223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.681235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:35:06.078 [2024-12-14 13:01:05.681246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.834 ms 00:35:06.078 [2024-12-14 13:01:05.681255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.683564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.683599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:35:06.078 [2024-12-14 13:01:05.683609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:35:06.078 [2024-12-14 13:01:05.683617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.683701] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:35:06.078 [2024-12-14 13:01:05.684186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.684198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:35:06.078 [2024-12-14 13:01:05.684208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.507 ms 00:35:06.078 [2024-12-14 13:01:05.684216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.684247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.684258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:35:06.078 [2024-12-14 13:01:05.684266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:06.078 [2024-12-14 13:01:05.684274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.684308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:35:06.078 [2024-12-14 13:01:05.684318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.684327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:35:06.078 [2024-12-14 13:01:05.684335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:35:06.078 [2024-12-14 13:01:05.684344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.711577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.711633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:35:06.078 [2024-12-14 13:01:05.711648] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.215 ms 00:35:06.078 [2024-12-14 13:01:05.711656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.711745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:06.078 [2024-12-14 13:01:05.711756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:35:06.078 [2024-12-14 13:01:05.711766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:35:06.078 [2024-12-14 13:01:05.711774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:06.078 [2024-12-14 13:01:05.713011] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.676 ms, result 0 00:35:07.463  [2024-12-14T13:01:08.142Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-14T13:01:09.086Z] Copying: 23/1024 [MB] (10 MBps) [2024-12-14T13:01:10.030Z] Copying: 34/1024 [MB] (11 MBps) [2024-12-14T13:01:10.972Z] Copying: 48/1024 [MB] (13 MBps) [2024-12-14T13:01:12.358Z] Copying: 58/1024 [MB] (10 MBps) [2024-12-14T13:01:12.931Z] Copying: 71/1024 [MB] (12 MBps) [2024-12-14T13:01:14.317Z] Copying: 87/1024 [MB] (15 MBps) [2024-12-14T13:01:15.264Z] Copying: 102/1024 [MB] (15 MBps) [2024-12-14T13:01:16.207Z] Copying: 117/1024 [MB] (14 MBps) [2024-12-14T13:01:17.150Z] Copying: 129/1024 [MB] (12 MBps) [2024-12-14T13:01:18.092Z] Copying: 142/1024 [MB] (12 MBps) [2024-12-14T13:01:19.091Z] Copying: 156/1024 [MB] (13 MBps) [2024-12-14T13:01:20.034Z] Copying: 168/1024 [MB] (12 MBps) [2024-12-14T13:01:20.978Z] Copying: 184/1024 [MB] (15 MBps) [2024-12-14T13:01:21.927Z] Copying: 198/1024 [MB] (14 MBps) [2024-12-14T13:01:23.313Z] Copying: 211/1024 [MB] (12 MBps) [2024-12-14T13:01:24.257Z] Copying: 222/1024 [MB] (11 MBps) [2024-12-14T13:01:25.200Z] Copying: 234/1024 [MB] (12 MBps) [2024-12-14T13:01:26.143Z] Copying: 250/1024 [MB] (15 MBps) [2024-12-14T13:01:27.085Z] Copying: 260/1024 [MB] (10 MBps) [2024-12-14T13:01:28.028Z] Copying: 279/1024 [MB] (18 MBps) [2024-12-14T13:01:28.972Z] Copying: 303/1024 [MB] (23 MBps) [2024-12-14T13:01:30.359Z] Copying: 317/1024 [MB] (14 MBps) [2024-12-14T13:01:30.933Z] Copying: 344/1024 [MB] (26 MBps) [2024-12-14T13:01:32.320Z] Copying: 368/1024 [MB] (23 MBps) [2024-12-14T13:01:33.264Z] Copying: 386/1024 [MB] (17 MBps) [2024-12-14T13:01:34.207Z] Copying: 403/1024 [MB] (17 MBps) [2024-12-14T13:01:35.151Z] Copying: 420/1024 [MB] (17 MBps) [2024-12-14T13:01:36.093Z] Copying: 442/1024 [MB] (21 MBps) [2024-12-14T13:01:37.037Z] Copying: 464/1024 [MB] (21 MBps) [2024-12-14T13:01:37.981Z] Copying: 485/1024 [MB] (21 MBps) [2024-12-14T13:01:38.925Z] Copying: 501/1024 [MB] (16 MBps) [2024-12-14T13:01:40.314Z] Copying: 519/1024 [MB] (18 MBps) [2024-12-14T13:01:41.257Z] Copying: 533/1024 [MB] (13 MBps) [2024-12-14T13:01:42.200Z] Copying: 544/1024 [MB] (11 MBps) [2024-12-14T13:01:43.144Z] Copying: 560/1024 [MB] (15 MBps) [2024-12-14T13:01:44.087Z] Copying: 576/1024 [MB] (15 MBps) [2024-12-14T13:01:45.030Z] Copying: 595/1024 [MB] (19 MBps) [2024-12-14T13:01:45.972Z] Copying: 614/1024 [MB] (19 MBps) [2024-12-14T13:01:47.360Z] Copying: 631/1024 [MB] (16 MBps) [2024-12-14T13:01:47.956Z] Copying: 649/1024 [MB] (18 MBps) [2024-12-14T13:01:48.941Z] Copying: 662/1024 [MB] (12 MBps) [2024-12-14T13:01:50.327Z] Copying: 679/1024 [MB] (17 MBps) [2024-12-14T13:01:51.275Z] Copying: 691/1024 [MB] (11 MBps) [2024-12-14T13:01:52.219Z] Copying: 709/1024 [MB] (18 MBps) [2024-12-14T13:01:53.163Z] Copying: 734/1024 [MB] 
(25 MBps) [2024-12-14T13:01:54.107Z] Copying: 750/1024 [MB] (15 MBps) [2024-12-14T13:01:55.053Z] Copying: 788/1024 [MB] (38 MBps) [2024-12-14T13:01:55.997Z] Copying: 801/1024 [MB] (13 MBps) [2024-12-14T13:01:56.941Z] Copying: 814/1024 [MB] (12 MBps) [2024-12-14T13:01:58.330Z] Copying: 829/1024 [MB] (15 MBps) [2024-12-14T13:01:59.276Z] Copying: 843/1024 [MB] (13 MBps) [2024-12-14T13:02:00.220Z] Copying: 854/1024 [MB] (11 MBps) [2024-12-14T13:02:01.164Z] Copying: 872/1024 [MB] (17 MBps) [2024-12-14T13:02:02.106Z] Copying: 885/1024 [MB] (12 MBps) [2024-12-14T13:02:03.049Z] Copying: 902/1024 [MB] (17 MBps) [2024-12-14T13:02:03.993Z] Copying: 917/1024 [MB] (15 MBps) [2024-12-14T13:02:04.936Z] Copying: 929/1024 [MB] (11 MBps) [2024-12-14T13:02:06.320Z] Copying: 940/1024 [MB] (11 MBps) [2024-12-14T13:02:07.260Z] Copying: 951/1024 [MB] (11 MBps) [2024-12-14T13:02:08.204Z] Copying: 962/1024 [MB] (11 MBps) [2024-12-14T13:02:09.149Z] Copying: 973/1024 [MB] (10 MBps) [2024-12-14T13:02:10.093Z] Copying: 987/1024 [MB] (13 MBps) [2024-12-14T13:02:11.038Z] Copying: 1003/1024 [MB] (16 MBps) [2024-12-14T13:02:11.298Z] Copying: 1021/1024 [MB] (18 MBps) [2024-12-14T13:02:11.561Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-14 13:02:11.298584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.824 [2024-12-14 13:02:11.298710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:11.824 [2024-12-14 13:02:11.298740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:36:11.824 [2024-12-14 13:02:11.298758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.825 [2024-12-14 13:02:11.298811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:11.825 [2024-12-14 13:02:11.304661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.825 [2024-12-14 13:02:11.304726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:11.825 [2024-12-14 13:02:11.304747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.818 ms 00:36:11.825 [2024-12-14 13:02:11.304775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.825 [2024-12-14 13:02:11.305288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.825 [2024-12-14 13:02:11.305324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:11.825 [2024-12-14 13:02:11.305345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:36:11.825 [2024-12-14 13:02:11.305361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.825 [2024-12-14 13:02:11.305465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.825 [2024-12-14 13:02:11.305486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:11.825 [2024-12-14 13:02:11.305504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:36:11.825 [2024-12-14 13:02:11.305519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.825 [2024-12-14 13:02:11.305608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.825 [2024-12-14 13:02:11.305621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:11.825 [2024-12-14 13:02:11.305630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:36:11.825 [2024-12-14 
13:02:11.305639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.825 [2024-12-14 13:02:11.305654] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:11.825 [2024-12-14 13:02:11.305669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:11.825 [2024-12-14 13:02:11.305679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305856] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.305997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 
13:02:11.306078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:36:11.825 [2024-12-14 13:02:11.306276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:11.825 [2024-12-14 13:02:11.306301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:11.826 [2024-12-14 13:02:11.306493] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:11.826 [2024-12-14 13:02:11.306502] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a4449f4f-a8d5-417d-b7f9-195ffc128a42 00:36:11.826 [2024-12-14 13:02:11.306512] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:11.826 [2024-12-14 13:02:11.306520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 2848 00:36:11.826 [2024-12-14 13:02:11.306532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 2816 00:36:11.826 [2024-12-14 13:02:11.306541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0114 00:36:11.826 [2024-12-14 13:02:11.306551] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:11.826 [2024-12-14 13:02:11.306560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:11.826 [2024-12-14 13:02:11.306568] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:11.826 [2024-12-14 13:02:11.306575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:11.826 [2024-12-14 13:02:11.306581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:36:11.826 [2024-12-14 13:02:11.306588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.826 [2024-12-14 13:02:11.306600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:11.826 [2024-12-14 13:02:11.306607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:36:11.826 [2024-12-14 13:02:11.306615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.826 [2024-12-14 13:02:11.321166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.826 [2024-12-14 13:02:11.321216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:11.826 [2024-12-14 13:02:11.321236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.535 ms 00:36:11.826 [2024-12-14 13:02:11.321246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.826 [2024-12-14 13:02:11.321690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:11.826 [2024-12-14 13:02:11.321709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:11.826 [2024-12-14 13:02:11.321718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:36:11.826 [2024-12-14 13:02:11.321727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.826 [2024-12-14 13:02:11.360883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.826 [2024-12-14 13:02:11.361219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:11.826 [2024-12-14 13:02:11.361242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:11.826 [2024-12-14 13:02:11.361252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:11.826 [2024-12-14 13:02:11.361326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:11.826 [2024-12-14 13:02:11.361336] mngt/ftl_mngt.c: 
00:36:11.826 [2024-12-14 13:02:11.361345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.361354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.361416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.361453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:36:11.826 [2024-12-14 13:02:11.361463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.361471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.361490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.361499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:36:11.826 [2024-12-14 13:02:11.361508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.361516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.453910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.453973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:36:11.826 [2024-12-14 13:02:11.453988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.453998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.528491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.528802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:36:11.826 [2024-12-14 13:02:11.528827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.528837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.528958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.528969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:36:11.826 [2024-12-14 13:02:11.528985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.528995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.529037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.529048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:36:11.826 [2024-12-14 13:02:11.529087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.529096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.529197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.529208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:36:11.826 [2024-12-14 13:02:11.529221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.529236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.529265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.529276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:36:11.826 [2024-12-14 13:02:11.529286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.529295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.529346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.529358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:36:11.826 [2024-12-14 13:02:11.529367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.529380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.529454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:36:11.826 [2024-12-14 13:02:11.529466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:36:11.826 [2024-12-14 13:02:11.529476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:36:11.826 [2024-12-14 13:02:11.529485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:36:11.826 [2024-12-14 13:02:11.529652] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 231.449 ms, result 0
00:36:12.771
00:36:12.771
00:36:12.771 13:02:12 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:36:14.686 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:36:14.686 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:36:14.686 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 85867
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85867 ']'
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85867
00:36:14.948 Process with pid 85867 is not found
00:36:14.948 Remove shared memory files
00:36:14.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85867) - No such process
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 85867 is not found'
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_band_md /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_l2p_l1 /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_l2p_l2 /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_l2p_l2_ctx /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_nvc_md /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_p2l_pool /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_sb /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_sb_shm /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_trim_bitmap /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_trim_log /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_trim_md /dev/hugepages/ftl_a4449f4f-a8d5-417d-b7f9-195ffc128a42_vmap
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f
00:36:14.948 ************************************
00:36:14.948 END TEST ftl_restore_fast
00:36:14.948 ************************************
00:36:14.948
00:36:14.948
00:36:14.948 real 4m26.379s
00:36:14.948 user 4m14.424s
00:36:14.948 sys 0m11.807s
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:14.948 13:02:14 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x
00:36:14.948 Process with pid 76759 is not found
00:36:14.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:14.948 13:02:14 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:36:14.948 13:02:14 ftl -- ftl/ftl.sh@14 -- # killprocess 76759
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@954 -- # '[' -z 76759 ']'
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@958 -- # kill -0 76759
00:36:14.948 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76759) - No such process
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76759 is not found'
00:36:14.948 13:02:14 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:36:14.948 13:02:14 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88570
00:36:14.948 13:02:14 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88570
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@835 -- # '[' -z 88570 ']'
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:14.948 13:02:14 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:14.948 13:02:14 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:14.948 [2024-12-14 13:02:14.621715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:36:14.948 [2024-12-14 13:02:14.621966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88570 ]
00:36:15.209 [2024-12-14 13:02:14.776994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:15.209 [2024-12-14 13:02:14.884308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:36:16.160 13:02:15 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:16.160 13:02:15 ftl -- common/autotest_common.sh@868 -- # return 0
00:36:16.160 13:02:15 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:36:16.471 nvme0n1
00:36:16.471 13:02:15 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:36:16.471 13:02:15 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:36:16.471 13:02:15 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:36:16.471 13:02:16 ftl -- ftl/common.sh@28 -- # stores=6e399793-4cb3-4b41-986d-7482a291e4bd
00:36:16.471 13:02:16 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:36:16.471 13:02:16 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6e399793-4cb3-4b41-986d-7482a291e4bd
00:36:16.742 13:02:16 ftl -- ftl/ftl.sh@23 -- # killprocess 88570
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@954 -- # '[' -z 88570 ']'
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@958 -- # kill -0 88570
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@959 -- # uname
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88570
00:36:16.742 killing process with pid 88570
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88570'
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@973 -- # kill 88570
00:36:16.742 13:02:16 ftl -- common/autotest_common.sh@978 -- # wait 88570
00:36:18.129 13:02:17 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:36:18.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:18.390 Waiting for block devices as requested
00:36:18.390 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:36:18.390 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:36:18.651 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:36:18.651 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:36:23.942 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:36:23.942 Remove shared memory files
00:36:23.942 13:02:23 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:36:23.942 13:02:23 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:36:23.942 13:02:23 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:36:23.942 13:02:23 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:36:23.942 13:02:23 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:36:23.942 13:02:23 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:36:23.942 13:02:23 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:36:23.942 ************************************
00:36:23.942 END TEST ftl
00:36:23.942 ************************************
00:36:23.942
00:36:23.942
00:36:23.942 real 18m3.275s
00:36:23.942 user 20m17.660s
00:36:23.942 sys 1m35.162s
00:36:23.942 13:02:23 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:23.942 13:02:23 ftl -- common/autotest_common.sh@10 -- # set +x
00:36:23.942 13:02:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:23.942 13:02:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:23.942 13:02:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:23.942 13:02:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:23.942 13:02:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:23.942 13:02:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:23.942 13:02:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:23.942 13:02:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:23.942 13:02:23 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:23.942 13:02:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:23.942 13:02:23 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:23.942 13:02:23 -- common/autotest_common.sh@10 -- # set +x
00:36:23.942 13:02:23 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:23.942 13:02:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:23.942 13:02:23 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:23.942 13:02:23 -- common/autotest_common.sh@10 -- # set +x
00:36:25.328 INFO: APP EXITING
00:36:25.328 INFO: killing all VMs
00:36:25.328 INFO: killing vhost app
00:36:25.328 INFO: EXIT DONE
00:36:25.590 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:25.850 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:36:25.850 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:36:25.850 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:36:25.850 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:36:26.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:36:26.682 Cleaning
00:36:26.682 Removing: /var/run/dpdk/spdk0/config
00:36:26.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:26.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:26.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:26.682 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:26.682 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:26.682 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:26.682 Removing: /var/run/dpdk/spdk0
00:36:26.682 Removing: /var/run/dpdk/spdk_pid58704
00:36:26.682 Removing: /var/run/dpdk/spdk_pid58906
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59113
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59212
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59246
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59368
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59386
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59574
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59661
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59751
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59857
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59948
00:36:26.682 Removing: /var/run/dpdk/spdk_pid59988
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60019
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60095
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60179
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60609
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60668
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60720
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60736
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60827
00:36:26.682 Removing: /var/run/dpdk/spdk_pid60843
00:36:26.683 Removing: /var/run/dpdk/spdk_pid60934
00:36:26.683 Removing: /var/run/dpdk/spdk_pid60950
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61003
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61021
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61074
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61092
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61247
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61283
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61367
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61544
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61627
00:36:26.683 Removing: /var/run/dpdk/spdk_pid61659
00:36:26.683 Removing: /var/run/dpdk/spdk_pid62086
00:36:26.683 Removing: /var/run/dpdk/spdk_pid62184
00:36:26.683 Removing: /var/run/dpdk/spdk_pid62299
00:36:26.683 Removing: /var/run/dpdk/spdk_pid62354
00:36:26.683 Removing: /var/run/dpdk/spdk_pid62374
00:36:26.683 Removing: /var/run/dpdk/spdk_pid62458
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63078
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63115
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63575
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63673
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63782
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63835
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63855
00:36:26.683 Removing: /var/run/dpdk/spdk_pid63886
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65744
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65876
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65885
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65897
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65939
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65943
00:36:26.683 Removing: /var/run/dpdk/spdk_pid65955
00:36:26.945 Removing: /var/run/dpdk/spdk_pid66000
00:36:26.945 Removing: /var/run/dpdk/spdk_pid66004
00:36:26.945 Removing: /var/run/dpdk/spdk_pid66016
00:36:26.945 Removing: /var/run/dpdk/spdk_pid66061
00:36:26.945 Removing: /var/run/dpdk/spdk_pid66065
00:36:26.945 Removing: /var/run/dpdk/spdk_pid66077
00:36:26.945 Removing: /var/run/dpdk/spdk_pid67468
00:36:26.945 Removing: /var/run/dpdk/spdk_pid67565
00:36:26.945 Removing: /var/run/dpdk/spdk_pid68977
00:36:26.945 Removing: /var/run/dpdk/spdk_pid70745
00:36:26.945 Removing: /var/run/dpdk/spdk_pid70819
00:36:26.945 Removing: /var/run/dpdk/spdk_pid70900
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71004
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71096
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71194
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71262
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71343
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71447
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71539
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71640
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71713
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71784
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71888
00:36:26.945 Removing: /var/run/dpdk/spdk_pid71985
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72081
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72155
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72230
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72330
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72427
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72523
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72597
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72672
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72742
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72822
00:36:26.945 Removing: /var/run/dpdk/spdk_pid72926
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73011
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73106
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73180
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73254
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73323
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73406
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73509
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73595
00:36:26.945 Removing: /var/run/dpdk/spdk_pid73742
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74022
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74064
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74520
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74704
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74803
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74907
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74959
00:36:26.945 Removing: /var/run/dpdk/spdk_pid74983
00:36:26.945 Removing: /var/run/dpdk/spdk_pid75282
00:36:26.945 Removing: /var/run/dpdk/spdk_pid75344
00:36:26.945 Removing: /var/run/dpdk/spdk_pid75417
00:36:26.945 Removing: /var/run/dpdk/spdk_pid75815
00:36:26.945 Removing: /var/run/dpdk/spdk_pid75955
00:36:26.945 Removing: /var/run/dpdk/spdk_pid76759
00:36:26.945 Removing: /var/run/dpdk/spdk_pid76887
00:36:26.945 Removing: /var/run/dpdk/spdk_pid77056
00:36:26.945 Removing: /var/run/dpdk/spdk_pid77171
00:36:26.945 Removing: /var/run/dpdk/spdk_pid77470
00:36:26.945 Removing: /var/run/dpdk/spdk_pid77728
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78080
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78257
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78438
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78491
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78683
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78714
00:36:26.945 Removing: /var/run/dpdk/spdk_pid78768
00:36:26.945 Removing: /var/run/dpdk/spdk_pid79035
00:36:26.945 Removing: /var/run/dpdk/spdk_pid79266
00:36:26.945 Removing: /var/run/dpdk/spdk_pid79898
00:36:26.945 Removing: /var/run/dpdk/spdk_pid80622
00:36:26.945 Removing: /var/run/dpdk/spdk_pid81310
00:36:26.945 Removing: /var/run/dpdk/spdk_pid82080
00:36:26.945 Removing: /var/run/dpdk/spdk_pid82222
00:36:26.945 Removing: /var/run/dpdk/spdk_pid82300
00:36:26.945 Removing: /var/run/dpdk/spdk_pid82913
00:36:26.945 Removing: /var/run/dpdk/spdk_pid82968
00:36:26.945 Removing: /var/run/dpdk/spdk_pid83642
00:36:26.945 Removing: /var/run/dpdk/spdk_pid84057
00:36:26.945 Removing: /var/run/dpdk/spdk_pid84837
00:36:26.945 Removing: /var/run/dpdk/spdk_pid84965
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85011
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85065
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85121
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85185
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85361
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85454
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85532
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85591
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85626
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85687
00:36:26.945 Removing: /var/run/dpdk/spdk_pid85867
00:36:26.945 Removing: /var/run/dpdk/spdk_pid86106
00:36:26.945 Removing: /var/run/dpdk/spdk_pid86714
00:36:26.945 Removing: /var/run/dpdk/spdk_pid87352
00:36:26.945 Removing: /var/run/dpdk/spdk_pid87857
00:36:26.945 Removing: /var/run/dpdk/spdk_pid88570
00:36:26.945 Clean
00:36:27.207 13:02:26 -- common/autotest_common.sh@1453 -- # return 0
00:36:27.207 13:02:26 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:27.207 13:02:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:27.207 13:02:26 -- common/autotest_common.sh@10 -- # set +x
00:36:27.207 13:02:26 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:27.207 13:02:26 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:27.207 13:02:26 -- common/autotest_common.sh@10 -- # set +x
00:36:27.207 13:02:26 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:27.207 13:02:26 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:36:27.207 13:02:26 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:36:27.207 13:02:26 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:27.207 13:02:26 -- spdk/autotest.sh@398 -- # hostname
00:36:27.207 13:02:26 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:36:27.468 geninfo: WARNING: invalid characters removed from testname!
00:36:54.055 13:02:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:56.601 13:02:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:59.147 13:02:58 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:01.062 13:03:00 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:02.979 13:03:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:05.527 13:03:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:37:08.073 13:03:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:08.073 13:03:07 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:08.073 13:03:07 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:37:08.073 13:03:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:08.073 13:03:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:08.073 13:03:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:37:08.073 + [[ -n 5024 ]]
00:37:08.073 + sudo kill 5024
00:37:08.085 [Pipeline] }
00:37:08.100 [Pipeline] // timeout
00:37:08.105 [Pipeline] }
00:37:08.120 [Pipeline] // stage
00:37:08.125 [Pipeline] }
00:37:08.139 [Pipeline] // catchError
00:37:08.148 [Pipeline] stage
00:37:08.150 [Pipeline] { (Stop VM)
00:37:08.162 [Pipeline] sh
00:37:08.447 + vagrant halt
00:37:11.064 ==> default: Halting domain...
00:37:17.665 [Pipeline] sh
00:37:17.946 + vagrant destroy -f
00:37:20.494 ==> default: Removing domain...
00:37:21.079 [Pipeline] sh
00:37:21.365 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:37:21.376 [Pipeline] }
00:37:21.391 [Pipeline] // stage
00:37:21.396 [Pipeline] }
00:37:21.410 [Pipeline] // dir
00:37:21.415 [Pipeline] }
00:37:21.429 [Pipeline] // wrap
00:37:21.435 [Pipeline] }
00:37:21.447 [Pipeline] // catchError
00:37:21.457 [Pipeline] stage
00:37:21.459 [Pipeline] { (Epilogue)
00:37:21.471 [Pipeline] sh
00:37:21.755 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:27.047 [Pipeline] catchError
00:37:27.049 [Pipeline] {
00:37:27.063 [Pipeline] sh
00:37:27.348 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:27.348 Artifacts sizes are good
00:37:27.359 [Pipeline] }
00:37:27.372 [Pipeline] // catchError
00:37:27.382 [Pipeline] archiveArtifacts
00:37:27.390 Archiving artifacts
00:37:27.497 [Pipeline] cleanWs
00:37:27.509 [WS-CLEANUP] Deleting project workspace...
00:37:27.509 [WS-CLEANUP] Deferred wipeout is used...
00:37:27.515 [WS-CLEANUP] done
00:37:27.517 [Pipeline] }
00:37:27.531 [Pipeline] // stage
00:37:27.536 [Pipeline] }
00:37:27.549 [Pipeline] // node
00:37:27.554 [Pipeline] End of Pipeline
00:37:27.611 Finished: SUCCESS